2026-01-08 00:00:07.377731 | Job console starting
2026-01-08 00:00:07.419934 | Updating git repos
2026-01-08 00:00:07.555443 | Cloning repos into workspace
2026-01-08 00:00:07.874598 | Restoring repo states
2026-01-08 00:00:07.905357 | Merging changes
2026-01-08 00:00:07.905377 | Checking out repos
2026-01-08 00:00:08.359039 | Preparing playbooks
2026-01-08 00:00:09.530928 | Running Ansible setup
2026-01-08 00:00:18.099311 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-01-08 00:00:20.224430 |
2026-01-08 00:00:20.224560 | PLAY [Base pre]
2026-01-08 00:00:20.253980 |
2026-01-08 00:00:20.254123 | TASK [Setup log path fact]
2026-01-08 00:00:20.293823 | orchestrator | ok
2026-01-08 00:00:20.318928 |
2026-01-08 00:00:20.319064 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-08 00:00:20.368962 | orchestrator | ok
2026-01-08 00:00:20.392571 |
2026-01-08 00:00:20.392709 | TASK [emit-job-header : Print job information]
2026-01-08 00:00:20.487629 | # Job Information
2026-01-08 00:00:20.487834 | Ansible Version: 2.16.14
2026-01-08 00:00:20.487870 | Job: testbed-deploy-next-in-a-nutshell-with-tempest-ubuntu-24.04
2026-01-08 00:00:20.487903 | Pipeline: periodic-midnight
2026-01-08 00:00:20.487926 | Executor: 521e9411259a
2026-01-08 00:00:20.487946 | Triggered by: https://github.com/osism/testbed
2026-01-08 00:00:20.487968 | Event ID: 37eb9e01da0e4aae86dc584bcba8fe62
2026-01-08 00:00:20.502726 |
2026-01-08 00:00:20.510265 | LOOP [emit-job-header : Print node information]
2026-01-08 00:00:20.744582 | orchestrator | ok:
2026-01-08 00:00:20.744781 | orchestrator | # Node Information
2026-01-08 00:00:20.744851 | orchestrator | Inventory Hostname: orchestrator
2026-01-08 00:00:20.744878 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-01-08 00:00:20.744897 | orchestrator | Username: zuul-testbed03
2026-01-08 00:00:20.744919 | orchestrator | Distro: Debian 12.12
2026-01-08 00:00:20.744944 | orchestrator | Provider: static-testbed
2026-01-08 00:00:20.744962 | orchestrator | Region:
2026-01-08 00:00:20.744980 | orchestrator | Label: testbed-orchestrator
2026-01-08 00:00:20.744996 | orchestrator | Product Name: OpenStack Nova
2026-01-08 00:00:20.745018 | orchestrator | Interface IP: 81.163.193.140
2026-01-08 00:00:20.766073 |
2026-01-08 00:00:20.766193 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-01-08 00:00:21.950550 | orchestrator -> localhost | changed
2026-01-08 00:00:21.959316 |
2026-01-08 00:00:21.959436 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-01-08 00:00:24.550967 | orchestrator -> localhost | changed
2026-01-08 00:00:24.577937 |
2026-01-08 00:00:24.578057 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-01-08 00:00:25.477586 | orchestrator -> localhost | ok
2026-01-08 00:00:25.484411 |
2026-01-08 00:00:25.484521 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-01-08 00:00:25.523669 | orchestrator | ok
2026-01-08 00:00:25.542079 | orchestrator | included: /var/lib/zuul/builds/20443b1ab3f74324ba8bcbc6fdfc2e06/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-01-08 00:00:25.561158 |
2026-01-08 00:00:25.561265 | TASK [add-build-sshkey : Create Temp SSH key]
2026-01-08 00:00:28.482410 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-01-08 00:00:28.482692 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/20443b1ab3f74324ba8bcbc6fdfc2e06/work/20443b1ab3f74324ba8bcbc6fdfc2e06_id_rsa
2026-01-08 00:00:28.482733 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/20443b1ab3f74324ba8bcbc6fdfc2e06/work/20443b1ab3f74324ba8bcbc6fdfc2e06_id_rsa.pub
2026-01-08 00:00:28.482759 | orchestrator -> localhost | The key fingerprint is:
2026-01-08 00:00:28.482783 | orchestrator -> localhost | SHA256:3BsGra4Bw4bil3QyyADjjLeRp2jECV1mLYngSsOGEnU zuul-build-sshkey
2026-01-08 00:00:28.482806 | orchestrator -> localhost | The key's randomart image is:
2026-01-08 00:00:28.482856 | orchestrator -> localhost | +---[RSA 3072]----+
2026-01-08 00:00:28.482880 | orchestrator -> localhost | |*+.oEo |
2026-01-08 00:00:28.482902 | orchestrator -> localhost | |@o+=o . . |
2026-01-08 00:00:28.482923 | orchestrator -> localhost | |=%+ .. . . |
2026-01-08 00:00:28.482943 | orchestrator -> localhost | |OooB . + |
2026-01-08 00:00:28.482963 | orchestrator -> localhost | |++++=. S + |
2026-01-08 00:00:28.482992 | orchestrator -> localhost | |o...=o . . o |
2026-01-08 00:00:28.483013 | orchestrator -> localhost | | . o . . . |
2026-01-08 00:00:28.483033 | orchestrator -> localhost | | . o |
2026-01-08 00:00:28.483053 | orchestrator -> localhost | | . |
2026-01-08 00:00:28.483073 | orchestrator -> localhost | +----[SHA256]-----+
2026-01-08 00:00:28.483143 | orchestrator -> localhost | ok: Runtime: 0:00:01.573780
2026-01-08 00:00:28.491218 |
2026-01-08 00:00:28.491352 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-01-08 00:00:28.590322 | orchestrator | ok
2026-01-08 00:00:28.663640 | orchestrator | included: /var/lib/zuul/builds/20443b1ab3f74324ba8bcbc6fdfc2e06/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-01-08 00:00:28.724465 |
2026-01-08 00:00:28.724617 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-01-08 00:00:28.831877 | orchestrator | skipping: Conditional result was False
2026-01-08 00:00:28.840592 |
2026-01-08 00:00:28.840757 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-01-08 00:00:30.482922 | orchestrator | changed
2026-01-08 00:00:30.528309 |
2026-01-08 00:00:30.528463 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-01-08 00:00:30.831223 | orchestrator | ok
2026-01-08 00:00:30.891985 |
2026-01-08 00:00:30.892145 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-01-08 00:00:31.704750 | orchestrator | ok
2026-01-08 00:00:31.735474 |
2026-01-08 00:00:31.735636 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-01-08 00:00:32.292130 | orchestrator | ok
2026-01-08 00:00:32.311208 |
2026-01-08 00:00:32.311368 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-01-08 00:00:32.409506 | orchestrator | skipping: Conditional result was False
2026-01-08 00:00:32.416999 |
2026-01-08 00:00:32.417141 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-01-08 00:00:34.185095 | orchestrator -> localhost | changed
2026-01-08 00:00:34.201281 |
2026-01-08 00:00:34.213500 | TASK [add-build-sshkey : Add back temp key]
2026-01-08 00:00:35.213625 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/20443b1ab3f74324ba8bcbc6fdfc2e06/work/20443b1ab3f74324ba8bcbc6fdfc2e06_id_rsa (zuul-build-sshkey)
2026-01-08 00:00:35.213956 | orchestrator -> localhost | ok: Runtime: 0:00:00.019375
2026-01-08 00:00:35.221662 |
2026-01-08 00:00:35.221834 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-01-08 00:00:36.117203 | orchestrator | ok
2026-01-08 00:00:36.134258 |
2026-01-08 00:00:36.134404 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-01-08 00:00:36.195219 | orchestrator | skipping: Conditional result was False
2026-01-08 00:00:36.313139 |
2026-01-08 00:00:36.313310 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-01-08 00:00:37.071179 | orchestrator | ok
2026-01-08 00:00:37.115184 |
2026-01-08 00:00:37.115353 | TASK [validate-host : Define zuul_info_dir fact]
2026-01-08 00:00:37.197881 | orchestrator | ok
2026-01-08 00:00:37.221003 |
2026-01-08 00:00:37.221157 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-01-08 00:00:38.429531 | orchestrator -> localhost | ok
2026-01-08 00:00:38.463670 |
2026-01-08 00:00:38.463838 | TASK [validate-host : Collect information about the host]
2026-01-08 00:00:41.062551 | orchestrator | ok
2026-01-08 00:00:41.129130 |
2026-01-08 00:00:41.129311 | TASK [validate-host : Sanitize hostname]
2026-01-08 00:00:41.314568 | orchestrator | ok
2026-01-08 00:00:41.325935 |
2026-01-08 00:00:41.326089 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-01-08 00:00:43.656806 | orchestrator -> localhost | changed
2026-01-08 00:00:43.663925 |
2026-01-08 00:00:43.664061 | TASK [validate-host : Collect information about zuul worker]
2026-01-08 00:00:44.571223 | orchestrator | ok
2026-01-08 00:00:44.597447 |
2026-01-08 00:00:44.597609 | TASK [validate-host : Write out all zuul information for each host]
2026-01-08 00:00:47.096232 | orchestrator -> localhost | changed
2026-01-08 00:00:47.116905 |
2026-01-08 00:00:47.117072 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-01-08 00:00:47.492989 | orchestrator | ok
2026-01-08 00:00:47.506327 |
2026-01-08 00:00:47.506468 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-01-08 00:02:11.199463 | orchestrator | changed:
2026-01-08 00:02:11.199810 | orchestrator | .d..t...... src/
2026-01-08 00:02:11.199851 | orchestrator | .d..t...... src/github.com/
2026-01-08 00:02:11.199877 | orchestrator | .d..t...... src/github.com/osism/
2026-01-08 00:02:11.199900 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-01-08 00:02:11.199921 | orchestrator | RedHat.yml
2026-01-08 00:02:11.214688 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-01-08 00:02:11.214707 | orchestrator | RedHat.yml
2026-01-08 00:02:11.214761 | orchestrator | = 1.53.0"...
2026-01-08 00:02:22.049736 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-01-08 00:02:22.184391 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-01-08 00:02:22.971373 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-01-08 00:02:23.035205 | orchestrator | - Installing hashicorp/local v2.6.1...
2026-01-08 00:02:23.579217 | orchestrator | - Installed hashicorp/local v2.6.1 (signed, key ID 0C0AF313E5FD9F80)
2026-01-08 00:02:23.639809 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-01-08 00:02:24.069934 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-01-08 00:02:24.070061 | orchestrator |
2026-01-08 00:02:24.070071 | orchestrator | Providers are signed by their developers.
2026-01-08 00:02:24.070077 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-01-08 00:02:24.070081 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-01-08 00:02:24.070089 | orchestrator |
2026-01-08 00:02:24.070093 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-01-08 00:02:24.070098 | orchestrator | selections it made above. Include this file in your version control repository
2026-01-08 00:02:24.070110 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-01-08 00:02:24.070115 | orchestrator | you run "tofu init" in the future.
2026-01-08 00:02:24.070119 | orchestrator |
2026-01-08 00:02:24.070123 | orchestrator | OpenTofu has been successfully initialized!
2026-01-08 00:02:24.070127 | orchestrator |
2026-01-08 00:02:24.070131 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-01-08 00:02:24.070135 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-01-08 00:02:24.070139 | orchestrator | should now work.
2026-01-08 00:02:24.070142 | orchestrator |
2026-01-08 00:02:24.070146 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-01-08 00:02:24.070150 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-01-08 00:02:24.070155 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-01-08 00:02:24.248976 | orchestrator | Created and switched to workspace "ci"!
2026-01-08 00:02:24.249068 | orchestrator |
2026-01-08 00:02:24.249080 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-01-08 00:02:24.249089 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-01-08 00:02:24.249097 | orchestrator | for this configuration.
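The provider installation, workspace creation, and plan messages above correspond roughly to the following command sequence. This is a sketch reconstructed from the console output, not the job's actual wrapper script, whose exact invocation and flags are not visible in this log; it assumes the testbed Terraform configuration and `ci.auto.tfvars` are in the working directory.

```shell
# Install providers and write .terraform.lock.hcl
# ("OpenTofu has been successfully initialized!")
tofu init

# Create and switch to an isolated state workspace
# ("Created and switched to workspace \"ci\"!")
tofu workspace new ci

# Produce the execution plan; *.auto.tfvars files such as
# ci.auto.tfvars are loaded automatically, no flag needed
tofu plan
```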
2026-01-08 00:02:24.344606 | orchestrator | ci.auto.tfvars
2026-01-08 00:02:25.204612 | orchestrator | default_custom.tf
2026-01-08 00:02:28.201853 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-01-08 00:02:28.785855 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-01-08 00:02:29.100199 | orchestrator |
2026-01-08 00:02:29.100278 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-01-08 00:02:29.100288 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-01-08 00:02:29.100294 | orchestrator |   + create
2026-01-08 00:02:29.100299 | orchestrator |  <= read (data resources)
2026-01-08 00:02:29.100304 | orchestrator |
2026-01-08 00:02:29.100309 | orchestrator | OpenTofu will perform the following actions:
2026-01-08 00:02:29.100322 | orchestrator |
2026-01-08 00:02:29.100326 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-01-08 00:02:29.100330 | orchestrator |   # (config refers to values not yet known)
2026-01-08 00:02:29.100334 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-01-08 00:02:29.100339 | orchestrator |       + checksum = (known after apply)
2026-01-08 00:02:29.100343 | orchestrator |       + created_at = (known after apply)
2026-01-08 00:02:29.100347 | orchestrator |       + file = (known after apply)
2026-01-08 00:02:29.100351 | orchestrator |       + id = (known after apply)
2026-01-08 00:02:29.100373 | orchestrator |       + metadata = (known after apply)
2026-01-08 00:02:29.100377 | orchestrator |       + min_disk_gb = (known after apply)
2026-01-08 00:02:29.100381 | orchestrator |       + min_ram_mb = (known after apply)
2026-01-08 00:02:29.100385 | orchestrator |       + most_recent = true
2026-01-08 00:02:29.100389 | orchestrator |       + name = (known after apply)
2026-01-08 00:02:29.100393 | orchestrator |       + protected = (known after apply)
2026-01-08 00:02:29.100396 | orchestrator |       + region = (known after apply)
2026-01-08 00:02:29.100404 | orchestrator |       + schema = (known after apply)
2026-01-08 00:02:29.100408 | orchestrator |       + size_bytes = (known after apply)
2026-01-08 00:02:29.100412 | orchestrator |       + tags = (known after apply)
2026-01-08 00:02:29.100416 | orchestrator |       + updated_at = (known after apply)
2026-01-08 00:02:29.100420 | orchestrator |     }
2026-01-08 00:02:29.100607 | orchestrator |
2026-01-08 00:02:29.100614 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-01-08 00:02:29.100618 | orchestrator |   # (config refers to values not yet known)
2026-01-08 00:02:29.100622 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-01-08 00:02:29.100626 | orchestrator |       + checksum = (known after apply)
2026-01-08 00:02:29.100630 | orchestrator |       + created_at = (known after apply)
2026-01-08 00:02:29.100633 | orchestrator |       + file = (known after apply)
2026-01-08 00:02:29.100637 | orchestrator |       + id = (known after apply)
2026-01-08 00:02:29.100641 | orchestrator |       + metadata = (known after apply)
2026-01-08 00:02:29.100645 | orchestrator |       + min_disk_gb = (known after apply)
2026-01-08 00:02:29.100648 | orchestrator |       + min_ram_mb = (known after apply)
2026-01-08 00:02:29.100652 | orchestrator |       + most_recent = true
2026-01-08 00:02:29.100656 | orchestrator |       + name = (known after apply)
2026-01-08 00:02:29.100660 | orchestrator |       + protected = (known after apply)
2026-01-08 00:02:29.100663 | orchestrator |       + region = (known after apply)
2026-01-08 00:02:29.100667 | orchestrator |       + schema = (known after apply)
2026-01-08 00:02:29.100671 | orchestrator |       + size_bytes = (known after apply)
2026-01-08 00:02:29.100675 | orchestrator |       + tags = (known after apply)
2026-01-08 00:02:29.100679 | orchestrator |       + updated_at = (known after apply)
2026-01-08 00:02:29.100682 | orchestrator |     }
2026-01-08 00:02:29.100688 | orchestrator |
2026-01-08 00:02:29.100692 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-01-08 00:02:29.100696 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-01-08 00:02:29.100700 | orchestrator |       + content = (known after apply)
2026-01-08 00:02:29.100704 | orchestrator |       + content_base64sha256 = (known after apply)
2026-01-08 00:02:29.100708 | orchestrator |       + content_base64sha512 = (known after apply)
2026-01-08 00:02:29.100712 | orchestrator |       + content_md5 = (known after apply)
2026-01-08 00:02:29.100715 | orchestrator |       + content_sha1 = (known after apply)
2026-01-08 00:02:29.100719 | orchestrator |       + content_sha256 = (known after apply)
2026-01-08 00:02:29.100723 | orchestrator |       + content_sha512 = (known after apply)
2026-01-08 00:02:29.100727 | orchestrator |       + directory_permission = "0777"
2026-01-08 00:02:29.100730 | orchestrator |       + file_permission = "0644"
2026-01-08 00:02:29.100734 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-01-08 00:02:29.100738 | orchestrator |       + id = (known after apply)
2026-01-08 00:02:29.100742 | orchestrator |     }
2026-01-08 00:02:29.100749 | orchestrator |
2026-01-08 00:02:29.100754 | orchestrator |   # local_file.id_rsa_pub will be created
2026-01-08 00:02:29.100758 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-01-08 00:02:29.100762 | orchestrator |       + content = (known after apply)
2026-01-08 00:02:29.100766 | orchestrator |       + content_base64sha256 = (known after apply)
2026-01-08 00:02:29.100770 | orchestrator |       + content_base64sha512 = (known after apply)
2026-01-08 00:02:29.100773 | orchestrator |       + content_md5 = (known after apply)
2026-01-08 00:02:29.100777 | orchestrator |       + content_sha1 = (known after apply)
2026-01-08 00:02:29.100781 | orchestrator |       + content_sha256 = (known after apply)
2026-01-08 00:02:29.100784 | orchestrator |       + content_sha512 = (known after apply)
2026-01-08 00:02:29.100788 | orchestrator |       + directory_permission = "0777"
2026-01-08 00:02:29.100792 | orchestrator |       + file_permission = "0644"
2026-01-08 00:02:29.100800 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-01-08 00:02:29.100804 | orchestrator |       + id = (known after apply)
2026-01-08 00:02:29.100808 | orchestrator |     }
2026-01-08 00:02:29.100890 | orchestrator |
2026-01-08 00:02:29.100904 | orchestrator |   # local_file.inventory will be created
2026-01-08 00:02:29.100908 | orchestrator |   + resource "local_file" "inventory" {
2026-01-08 00:02:29.100912 | orchestrator |       + content = (known after apply)
2026-01-08 00:02:29.100916 | orchestrator |       + content_base64sha256 = (known after apply)
2026-01-08 00:02:29.100919 | orchestrator |       + content_base64sha512 = (known after apply)
2026-01-08 00:02:29.100923 | orchestrator |       + content_md5 = (known after apply)
2026-01-08 00:02:29.100927 | orchestrator |       + content_sha1 = (known after apply)
2026-01-08 00:02:29.100931 | orchestrator |       + content_sha256 = (known after apply)
2026-01-08 00:02:29.100935 | orchestrator |       + content_sha512 = (known after apply)
2026-01-08 00:02:29.100938 | orchestrator |       + directory_permission = "0777"
2026-01-08 00:02:29.100942 | orchestrator |       + file_permission = "0644"
2026-01-08 00:02:29.100946 | orchestrator |       + filename = "inventory.ci"
2026-01-08 00:02:29.100950 | orchestrator |       + id = (known after apply)
2026-01-08 00:02:29.100953 | orchestrator |     }
2026-01-08 00:02:29.100995 | orchestrator |
2026-01-08 00:02:29.101001 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-01-08 00:02:29.101005 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-01-08 00:02:29.101008 | orchestrator |       + content = (sensitive value)
2026-01-08 00:02:29.101012 | orchestrator |       + content_base64sha256 = (known after apply)
2026-01-08 00:02:29.101016 | orchestrator |       + content_base64sha512 = (known after apply)
2026-01-08 00:02:29.101020 | orchestrator |       + content_md5 = (known after apply)
2026-01-08 00:02:29.101024 | orchestrator |       + content_sha1 = (known after apply)
2026-01-08 00:02:29.101027 | orchestrator |       + content_sha256 = (known after apply)
2026-01-08 00:02:29.101031 | orchestrator |       + content_sha512 = (known after apply)
2026-01-08 00:02:29.101035 | orchestrator |       + directory_permission = "0700"
2026-01-08 00:02:29.101039 | orchestrator |       + file_permission = "0600"
2026-01-08 00:02:29.101042 | orchestrator |       + filename = ".id_rsa.ci"
2026-01-08 00:02:29.101046 | orchestrator |       + id = (known after apply)
2026-01-08 00:02:29.101050 | orchestrator |     }
2026-01-08 00:02:29.101055 | orchestrator |
2026-01-08 00:02:29.101059 | orchestrator |   # null_resource.node_semaphore will be created
2026-01-08 00:02:29.101063 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-01-08 00:02:29.101067 | orchestrator |       + id = (known after apply)
2026-01-08 00:02:29.101071 | orchestrator |     }
2026-01-08 00:02:29.101111 | orchestrator |
2026-01-08 00:02:29.101116 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-01-08 00:02:29.101120 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-01-08 00:02:29.101124 | orchestrator |       + attachment = (known after apply)
2026-01-08 00:02:29.101128 | orchestrator |       + availability_zone = "nova"
2026-01-08 00:02:29.101132 | orchestrator |       + id = (known after apply)
2026-01-08 00:02:29.101136 | orchestrator |       + image_id = (known after apply)
2026-01-08 00:02:29.101139 | orchestrator |       + metadata = (known after apply)
2026-01-08 00:02:29.101143 | orchestrator |       + name = "testbed-volume-manager-base"
2026-01-08 00:02:29.101147 | orchestrator |       + region = (known after apply)
2026-01-08 00:02:29.101151 | orchestrator |       + size = 80
2026-01-08 00:02:29.101155 | orchestrator |       + volume_retype_policy = "never"
2026-01-08 00:02:29.101158 | orchestrator |       + volume_type = "ssd"
2026-01-08 00:02:29.101162 | orchestrator |     }
2026-01-08 00:02:29.101167 | orchestrator |
2026-01-08 00:02:29.101171 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-01-08 00:02:29.101175 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-08 00:02:29.101179 | orchestrator |       + attachment = (known after apply)
2026-01-08 00:02:29.101183 | orchestrator |       + availability_zone = "nova"
2026-01-08 00:02:29.101187 | orchestrator |       + id = (known after apply)
2026-01-08 00:02:29.101196 | orchestrator |       + image_id = (known after apply)
2026-01-08 00:02:29.101200 | orchestrator |       + metadata = (known after apply)
2026-01-08 00:02:29.101204 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-01-08 00:02:29.101208 | orchestrator |       + region = (known after apply)
2026-01-08 00:02:29.101212 | orchestrator |       + size = 80
2026-01-08 00:02:29.101215 | orchestrator |       + volume_retype_policy = "never"
2026-01-08 00:02:29.101219 | orchestrator |       + volume_type = "ssd"
2026-01-08 00:02:29.101223 | orchestrator |     }
2026-01-08 00:02:29.101281 | orchestrator |
2026-01-08 00:02:29.101287 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-01-08 00:02:29.101290 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-08 00:02:29.101294 | orchestrator |       + attachment = (known after apply)
2026-01-08 00:02:29.101298 | orchestrator |       + availability_zone = "nova"
2026-01-08 00:02:29.101302 | orchestrator |       + id = (known after apply)
2026-01-08 00:02:29.101306 | orchestrator |       + image_id = (known after apply)
2026-01-08 00:02:29.101309 | orchestrator |       + metadata = (known after apply)
2026-01-08 00:02:29.101313 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-01-08 00:02:29.101317 | orchestrator |       + region = (known after apply)
2026-01-08 00:02:29.101321 | orchestrator |       + size = 80
2026-01-08 00:02:29.101325 | orchestrator |       + volume_retype_policy = "never"
2026-01-08 00:02:29.101329 | orchestrator |       + volume_type = "ssd"
2026-01-08 00:02:29.101333 | orchestrator |     }
2026-01-08 00:02:29.101367 | orchestrator |
2026-01-08 00:02:29.101373 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-01-08 00:02:29.101376 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-08 00:02:29.101380 | orchestrator |       + attachment = (known after apply)
2026-01-08 00:02:29.101384 | orchestrator |       + availability_zone = "nova"
2026-01-08 00:02:29.101388 | orchestrator |       + id = (known after apply)
2026-01-08 00:02:29.101392 | orchestrator |       + image_id = (known after apply)
2026-01-08 00:02:29.101395 | orchestrator |       + metadata = (known after apply)
2026-01-08 00:02:29.101399 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-01-08 00:02:29.101403 | orchestrator |       + region = (known after apply)
2026-01-08 00:02:29.101407 | orchestrator |       + size = 80
2026-01-08 00:02:29.101411 | orchestrator |       + volume_retype_policy = "never"
2026-01-08 00:02:29.101414 | orchestrator |       + volume_type = "ssd"
2026-01-08 00:02:29.101418 | orchestrator |     }
2026-01-08 00:02:29.101462 | orchestrator |
2026-01-08 00:02:29.101467 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-01-08 00:02:29.101471 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-08 00:02:29.101475 | orchestrator |       + attachment = (known after apply)
2026-01-08 00:02:29.101479 | orchestrator |       + availability_zone = "nova"
2026-01-08 00:02:29.101483 | orchestrator |       + id = (known after apply)
2026-01-08 00:02:29.101486 | orchestrator |       + image_id = (known after apply)
2026-01-08 00:02:29.101490 | orchestrator |       + metadata = (known after apply)
2026-01-08 00:02:29.101497 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-01-08 00:02:29.101501 | orchestrator |       + region = (known after apply)
2026-01-08 00:02:29.101504 | orchestrator |       + size = 80
2026-01-08 00:02:29.101508 | orchestrator |       + volume_retype_policy = "never"
2026-01-08 00:02:29.101512 | orchestrator |       + volume_type = "ssd"
2026-01-08 00:02:29.101516 | orchestrator |     }
2026-01-08 00:02:29.101521 | orchestrator |
2026-01-08 00:02:29.101525 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-01-08 00:02:29.101529 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-08 00:02:29.101532 | orchestrator |       + attachment = (known after apply)
2026-01-08 00:02:29.101536 | orchestrator |       + availability_zone = "nova"
2026-01-08 00:02:29.101540 | orchestrator |       + id = (known after apply)
2026-01-08 00:02:29.101547 | orchestrator |       + image_id = (known after apply)
2026-01-08 00:02:29.101551 | orchestrator |       + metadata = (known after apply)
2026-01-08 00:02:29.101554 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-01-08 00:02:29.101558 | orchestrator |       + region = (known after apply)
2026-01-08 00:02:29.101562 | orchestrator |       + size = 80
2026-01-08 00:02:29.101566 | orchestrator |       + volume_retype_policy = "never"
2026-01-08 00:02:29.101570 | orchestrator |       + volume_type = "ssd"
2026-01-08 00:02:29.101573 | orchestrator |     }
2026-01-08 00:02:29.101578 | orchestrator |
2026-01-08 00:02:29.101582 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-01-08 00:02:29.101586 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-08 00:02:29.101590 | orchestrator |       + attachment = (known after apply)
2026-01-08 00:02:29.101594 | orchestrator |       + availability_zone = "nova"
2026-01-08 00:02:29.101597 | orchestrator |       + id = (known after apply)
2026-01-08 00:02:29.101601 | orchestrator |       + image_id = (known after apply)
2026-01-08 00:02:29.101605 | orchestrator |       + metadata = (known after apply)
2026-01-08 00:02:29.101609 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-01-08 00:02:29.101612 | orchestrator |       + region = (known after apply)
2026-01-08 00:02:29.101616 | orchestrator |       + size = 80
2026-01-08 00:02:29.101620 | orchestrator |       + volume_retype_policy = "never"
2026-01-08 00:02:29.101623 | orchestrator |       + volume_type = "ssd"
2026-01-08 00:02:29.101627 | orchestrator |     }
2026-01-08 00:02:29.101664 | orchestrator |
2026-01-08 00:02:29.101670 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-01-08 00:02:29.101674 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-08 00:02:29.101677 | orchestrator |       + attachment = (known after apply)
2026-01-08 00:02:29.101681 | orchestrator |       + availability_zone = "nova"
2026-01-08 00:02:29.101685 | orchestrator |       + id = (known after apply)
2026-01-08 00:02:29.101689 | orchestrator |       + metadata = (known after apply)
2026-01-08 00:02:29.101693 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-01-08 00:02:29.101696 | orchestrator |       + region = (known after apply)
2026-01-08 00:02:29.101701 | orchestrator |       + size = 20
2026-01-08 00:02:29.101704 | orchestrator |       + volume_retype_policy = "never"
2026-01-08 00:02:29.101708 | orchestrator |       + volume_type = "ssd"
2026-01-08 00:02:29.101712 | orchestrator |     }
2026-01-08 00:02:29.101750 | orchestrator |
2026-01-08 00:02:29.101758 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-01-08 00:02:29.101762 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-08 00:02:29.101766 | orchestrator |       + attachment = (known after apply)
2026-01-08 00:02:29.101770 | orchestrator |       + availability_zone = "nova"
2026-01-08 00:02:29.101774 | orchestrator |       + id = (known after apply)
2026-01-08 00:02:29.101777 | orchestrator |       + metadata = (known after apply)
2026-01-08 00:02:29.101781 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-01-08 00:02:29.101785 | orchestrator |       + region = (known after apply)
2026-01-08 00:02:29.101789 | orchestrator |       + size = 20
2026-01-08 00:02:29.101793 | orchestrator |       + volume_retype_policy = "never"
2026-01-08 00:02:29.101796 | orchestrator |       + volume_type = "ssd"
2026-01-08 00:02:29.101800 | orchestrator |     }
2026-01-08 00:02:29.101805 | orchestrator |
2026-01-08 00:02:29.101810 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-01-08 00:02:29.101813 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-08 00:02:29.101817 | orchestrator |       + attachment = (known after apply)
2026-01-08 00:02:29.101821 | orchestrator |       + availability_zone = "nova"
2026-01-08 00:02:29.101825 | orchestrator |       + id = (known after apply)
2026-01-08 00:02:29.101828 | orchestrator |       + metadata = (known after apply)
2026-01-08 00:02:29.101832 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-01-08 00:02:29.101836 | orchestrator |       + region = (known after apply)
2026-01-08 00:02:29.101843 | orchestrator |       + size = 20
2026-01-08 00:02:29.101847 | orchestrator |       + volume_retype_policy = "never"
2026-01-08 00:02:29.101850 | orchestrator |       + volume_type = "ssd"
2026-01-08 00:02:29.101854 | orchestrator |     }
2026-01-08 00:02:29.102067 | orchestrator |
2026-01-08 00:02:29.102074 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-01-08 00:02:29.102078 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-08 00:02:29.102082 | orchestrator |       + attachment = (known after apply)
2026-01-08 00:02:29.102086 | orchestrator |       + availability_zone = "nova"
2026-01-08 00:02:29.102090 | orchestrator |       + id = (known after apply)
2026-01-08 00:02:29.102093 | orchestrator |       + metadata = (known after apply)
2026-01-08 00:02:29.102097 | orchestrator |       + name = "testbed-volume-3-node-3"
2026-01-08 00:02:29.102101 | orchestrator |       + region = (known after apply)
2026-01-08 00:02:29.102105 | orchestrator |       + size = 20
2026-01-08 00:02:29.102109 | orchestrator |       + volume_retype_policy = "never"
2026-01-08 00:02:29.102112 | orchestrator |       + volume_type = "ssd"
2026-01-08 00:02:29.102116 | orchestrator |     }
2026-01-08 00:02:29.102122 | orchestrator |
2026-01-08 00:02:29.102126 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-01-08 00:02:29.102130 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-08 00:02:29.102133 | orchestrator |       + attachment = (known after apply)
2026-01-08 00:02:29.102137 | orchestrator |       + availability_zone = "nova"
2026-01-08 00:02:29.102141 | orchestrator |       + id = (known after apply)
2026-01-08 00:02:29.102145 | orchestrator |       + metadata = (known after apply)
2026-01-08 00:02:29.102149 | orchestrator |       + name = "testbed-volume-4-node-4"
2026-01-08 00:02:29.102152 | orchestrator |       + region = (known after apply)
2026-01-08 00:02:29.102159 | orchestrator |       + size = 20
2026-01-08 00:02:29.102163 | orchestrator |       + volume_retype_policy = "never"
2026-01-08 00:02:29.102167 | orchestrator |       + volume_type = "ssd"
2026-01-08 00:02:29.102171 | orchestrator |     }
2026-01-08 00:02:29.102176 | orchestrator |
2026-01-08 00:02:29.102180 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-01-08 00:02:29.102183 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-08 00:02:29.102187 | orchestrator |       + attachment = (known after apply)
2026-01-08 00:02:29.102191 | orchestrator |       + availability_zone = "nova"
2026-01-08 00:02:29.102195 | orchestrator |       + id = (known after apply)
2026-01-08 00:02:29.102199 | orchestrator |       + metadata = (known after apply)
2026-01-08 00:02:29.102202 | orchestrator |       + name = "testbed-volume-5-node-5"
2026-01-08 00:02:29.102206 | orchestrator |       + region = (known after apply)
2026-01-08 00:02:29.102210 | orchestrator |       + size = 20
2026-01-08 00:02:29.102213 | orchestrator |       + volume_retype_policy = "never"
2026-01-08 00:02:29.102217 | orchestrator |       + volume_type = "ssd"
2026-01-08 00:02:29.102221 | orchestrator |     }
2026-01-08 00:02:29.102226 | orchestrator |
2026-01-08 00:02:29.102230 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-01-08 00:02:29.102234 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-08 00:02:29.102237 | orchestrator |       + attachment = (known after apply)
2026-01-08 00:02:29.102241 | orchestrator |       + availability_zone = "nova"
2026-01-08 00:02:29.102245 | orchestrator |       + id = (known after apply)
2026-01-08 00:02:29.102248 | orchestrator |       + metadata = (known after apply)
2026-01-08 00:02:29.102252 | orchestrator |       + name = "testbed-volume-6-node-3"
2026-01-08 00:02:29.102256 | orchestrator |       + region = (known after apply)
2026-01-08 00:02:29.102260 | orchestrator |       + size = 20
2026-01-08 00:02:29.102263 | orchestrator |       + volume_retype_policy = "never"
2026-01-08 00:02:29.102267 | orchestrator |       + volume_type = "ssd"
2026-01-08 00:02:29.102271 | orchestrator |     }
2026-01-08 00:02:29.102276 | orchestrator |
2026-01-08 00:02:29.102280 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-01-08 00:02:29.102284 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-08 00:02:29.102291 | orchestrator |       + attachment = (known after apply)
2026-01-08 00:02:29.102295 | orchestrator |       + availability_zone = "nova"
2026-01-08 00:02:29.102299 | orchestrator |       + id = (known after apply)
2026-01-08 00:02:29.102302 | orchestrator |       + metadata = (known after apply)
2026-01-08 00:02:29.102306 | orchestrator |       + name = "testbed-volume-7-node-4"
2026-01-08 00:02:29.102310 | orchestrator |       + region = (known after apply)
2026-01-08 00:02:29.102314 | orchestrator |       + size = 20
2026-01-08 00:02:29.102318 | orchestrator |       + volume_retype_policy = "never"
2026-01-08 00:02:29.102321 | orchestrator |       + volume_type = "ssd"
2026-01-08 00:02:29.102325 | orchestrator |     }
2026-01-08 00:02:29.102330 | orchestrator |
2026-01-08 00:02:29.102334 | orchestrator |   #
  # openstack_blockstorage_volume_v3.node_volume[8] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-8-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_compute_instance_v2.manager_server will be created
  + resource "openstack_compute_instance_v2" "manager_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-4V-16"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-manager"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = (sensitive value)

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[0] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
+ dns_name = (known after apply) 2026-01-08 00:02:29.104492 | orchestrator | + fixed_ip = (known after apply) 2026-01-08 00:02:29.104496 | orchestrator | + id = (known after apply) 2026-01-08 00:02:29.104500 | orchestrator | + pool = "public" 2026-01-08 00:02:29.104503 | orchestrator | + port_id = (known after apply) 2026-01-08 00:02:29.104507 | orchestrator | + region = (known after apply) 2026-01-08 00:02:29.104511 | orchestrator | + subnet_id = (known after apply) 2026-01-08 00:02:29.104515 | orchestrator | + tenant_id = (known after apply) 2026-01-08 00:02:29.104518 | orchestrator | } 2026-01-08 00:02:29.104524 | orchestrator | 2026-01-08 00:02:29.104527 | orchestrator | # openstack_networking_network_v2.net_management will be created 2026-01-08 00:02:29.104531 | orchestrator | + resource "openstack_networking_network_v2" "net_management" { 2026-01-08 00:02:29.104535 | orchestrator | + admin_state_up = (known after apply) 2026-01-08 00:02:29.104539 | orchestrator | + all_tags = (known after apply) 2026-01-08 00:02:29.104543 | orchestrator | + availability_zone_hints = [ 2026-01-08 00:02:29.104546 | orchestrator | + "nova", 2026-01-08 00:02:29.104550 | orchestrator | ] 2026-01-08 00:02:29.104554 | orchestrator | + dns_domain = (known after apply) 2026-01-08 00:02:29.104558 | orchestrator | + external = (known after apply) 2026-01-08 00:02:29.104562 | orchestrator | + id = (known after apply) 2026-01-08 00:02:29.104565 | orchestrator | + mtu = (known after apply) 2026-01-08 00:02:29.104569 | orchestrator | + name = "net-testbed-management" 2026-01-08 00:02:29.104573 | orchestrator | + port_security_enabled = (known after apply) 2026-01-08 00:02:29.104580 | orchestrator | + qos_policy_id = (known after apply) 2026-01-08 00:02:29.104583 | orchestrator | + region = (known after apply) 2026-01-08 00:02:29.104587 | orchestrator | + shared = (known after apply) 2026-01-08 00:02:29.104591 | orchestrator | + tenant_id = (known after apply) 2026-01-08 00:02:29.104595 | 
orchestrator | + transparent_vlan = (known after apply) 2026-01-08 00:02:29.104598 | orchestrator | 2026-01-08 00:02:29.104602 | orchestrator | + segments (known after apply) 2026-01-08 00:02:29.104606 | orchestrator | } 2026-01-08 00:02:29.104610 | orchestrator | 2026-01-08 00:02:29.104614 | orchestrator | # openstack_networking_port_v2.manager_port_management will be created 2026-01-08 00:02:29.104617 | orchestrator | + resource "openstack_networking_port_v2" "manager_port_management" { 2026-01-08 00:02:29.104621 | orchestrator | + admin_state_up = (known after apply) 2026-01-08 00:02:29.104625 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-08 00:02:29.104629 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-08 00:02:29.104635 | orchestrator | + all_tags = (known after apply) 2026-01-08 00:02:29.104639 | orchestrator | + device_id = (known after apply) 2026-01-08 00:02:29.104643 | orchestrator | + device_owner = (known after apply) 2026-01-08 00:02:29.104646 | orchestrator | + dns_assignment = (known after apply) 2026-01-08 00:02:29.104650 | orchestrator | + dns_name = (known after apply) 2026-01-08 00:02:29.104654 | orchestrator | + id = (known after apply) 2026-01-08 00:02:29.104658 | orchestrator | + mac_address = (known after apply) 2026-01-08 00:02:29.104661 | orchestrator | + network_id = (known after apply) 2026-01-08 00:02:29.104665 | orchestrator | + port_security_enabled = (known after apply) 2026-01-08 00:02:29.104669 | orchestrator | + qos_policy_id = (known after apply) 2026-01-08 00:02:29.104672 | orchestrator | + region = (known after apply) 2026-01-08 00:02:29.104676 | orchestrator | + security_group_ids = (known after apply) 2026-01-08 00:02:29.104680 | orchestrator | + tenant_id = (known after apply) 2026-01-08 00:02:29.104684 | orchestrator | 2026-01-08 00:02:29.104687 | orchestrator | + allowed_address_pairs { 2026-01-08 00:02:29.104691 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-08 
00:02:29.104695 | orchestrator | } 2026-01-08 00:02:29.104699 | orchestrator | 2026-01-08 00:02:29.104702 | orchestrator | + binding (known after apply) 2026-01-08 00:02:29.104706 | orchestrator | 2026-01-08 00:02:29.104710 | orchestrator | + fixed_ip { 2026-01-08 00:02:29.104714 | orchestrator | + ip_address = "192.168.16.5" 2026-01-08 00:02:29.104718 | orchestrator | + subnet_id = (known after apply) 2026-01-08 00:02:29.104722 | orchestrator | } 2026-01-08 00:02:29.104725 | orchestrator | } 2026-01-08 00:02:29.104731 | orchestrator | 2026-01-08 00:02:29.104735 | orchestrator | # openstack_networking_port_v2.node_port_management[0] will be created 2026-01-08 00:02:29.104738 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-08 00:02:29.104742 | orchestrator | + admin_state_up = (known after apply) 2026-01-08 00:02:29.104746 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-08 00:02:29.104750 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-08 00:02:29.104753 | orchestrator | + all_tags = (known after apply) 2026-01-08 00:02:29.104757 | orchestrator | + device_id = (known after apply) 2026-01-08 00:02:29.104761 | orchestrator | + device_owner = (known after apply) 2026-01-08 00:02:29.104765 | orchestrator | + dns_assignment = (known after apply) 2026-01-08 00:02:29.104768 | orchestrator | + dns_name = (known after apply) 2026-01-08 00:02:29.104772 | orchestrator | + id = (known after apply) 2026-01-08 00:02:29.104776 | orchestrator | + mac_address = (known after apply) 2026-01-08 00:02:29.104779 | orchestrator | + network_id = (known after apply) 2026-01-08 00:02:29.104783 | orchestrator | + port_security_enabled = (known after apply) 2026-01-08 00:02:29.104787 | orchestrator | + qos_policy_id = (known after apply) 2026-01-08 00:02:29.104791 | orchestrator | + region = (known after apply) 2026-01-08 00:02:29.104797 | orchestrator | + security_group_ids = (known after apply) 2026-01-08 
00:02:29.104801 | orchestrator | + tenant_id = (known after apply) 2026-01-08 00:02:29.104805 | orchestrator | 2026-01-08 00:02:29.104809 | orchestrator | + allowed_address_pairs { 2026-01-08 00:02:29.104812 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-08 00:02:29.104816 | orchestrator | } 2026-01-08 00:02:29.104820 | orchestrator | + allowed_address_pairs { 2026-01-08 00:02:29.104824 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-08 00:02:29.104828 | orchestrator | } 2026-01-08 00:02:29.104831 | orchestrator | + allowed_address_pairs { 2026-01-08 00:02:29.104835 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-08 00:02:29.104839 | orchestrator | } 2026-01-08 00:02:29.104843 | orchestrator | 2026-01-08 00:02:29.104846 | orchestrator | + binding (known after apply) 2026-01-08 00:02:29.104850 | orchestrator | 2026-01-08 00:02:29.104854 | orchestrator | + fixed_ip { 2026-01-08 00:02:29.104858 | orchestrator | + ip_address = "192.168.16.10" 2026-01-08 00:02:29.104862 | orchestrator | + subnet_id = (known after apply) 2026-01-08 00:02:29.104865 | orchestrator | } 2026-01-08 00:02:29.104869 | orchestrator | } 2026-01-08 00:02:29.104875 | orchestrator | 2026-01-08 00:02:29.104879 | orchestrator | # openstack_networking_port_v2.node_port_management[1] will be created 2026-01-08 00:02:29.104882 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-08 00:02:29.104886 | orchestrator | + admin_state_up = (known after apply) 2026-01-08 00:02:29.104890 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-08 00:02:29.104894 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-08 00:02:29.104898 | orchestrator | + all_tags = (known after apply) 2026-01-08 00:02:29.104901 | orchestrator | + device_id = (known after apply) 2026-01-08 00:02:29.104905 | orchestrator | + device_owner = (known after apply) 2026-01-08 00:02:29.104909 | orchestrator | + dns_assignment = (known after 
apply) 2026-01-08 00:02:29.104913 | orchestrator | + dns_name = (known after apply) 2026-01-08 00:02:29.104916 | orchestrator | + id = (known after apply) 2026-01-08 00:02:29.104920 | orchestrator | + mac_address = (known after apply) 2026-01-08 00:02:29.104924 | orchestrator | + network_id = (known after apply) 2026-01-08 00:02:29.104927 | orchestrator | + port_security_enabled = (known after apply) 2026-01-08 00:02:29.104931 | orchestrator | + qos_policy_id = (known after apply) 2026-01-08 00:02:29.104935 | orchestrator | + region = (known after apply) 2026-01-08 00:02:29.104939 | orchestrator | + security_group_ids = (known after apply) 2026-01-08 00:02:29.104943 | orchestrator | + tenant_id = (known after apply) 2026-01-08 00:02:29.104946 | orchestrator | 2026-01-08 00:02:29.104950 | orchestrator | + allowed_address_pairs { 2026-01-08 00:02:29.104954 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-08 00:02:29.104958 | orchestrator | } 2026-01-08 00:02:29.104962 | orchestrator | + allowed_address_pairs { 2026-01-08 00:02:29.104965 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-08 00:02:29.104969 | orchestrator | } 2026-01-08 00:02:29.104973 | orchestrator | + allowed_address_pairs { 2026-01-08 00:02:29.104977 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-08 00:02:29.104980 | orchestrator | } 2026-01-08 00:02:29.104994 | orchestrator | 2026-01-08 00:02:29.104998 | orchestrator | + binding (known after apply) 2026-01-08 00:02:29.105002 | orchestrator | 2026-01-08 00:02:29.105005 | orchestrator | + fixed_ip { 2026-01-08 00:02:29.105009 | orchestrator | + ip_address = "192.168.16.11" 2026-01-08 00:02:29.105013 | orchestrator | + subnet_id = (known after apply) 2026-01-08 00:02:29.105017 | orchestrator | } 2026-01-08 00:02:29.105021 | orchestrator | } 2026-01-08 00:02:29.105026 | orchestrator | 2026-01-08 00:02:29.105030 | orchestrator | # openstack_networking_port_v2.node_port_management[2] will be created 2026-01-08 
00:02:29.105034 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-08 00:02:29.105038 | orchestrator | + admin_state_up = (known after apply) 2026-01-08 00:02:29.105041 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-08 00:02:29.105045 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-08 00:02:29.105049 | orchestrator | + all_tags = (known after apply) 2026-01-08 00:02:29.105055 | orchestrator | + device_id = (known after apply) 2026-01-08 00:02:29.105059 | orchestrator | + device_owner = (known after apply) 2026-01-08 00:02:29.105063 | orchestrator | + dns_assignment = (known after apply) 2026-01-08 00:02:29.105066 | orchestrator | + dns_name = (known after apply) 2026-01-08 00:02:29.105073 | orchestrator | + id = (known after apply) 2026-01-08 00:02:29.105076 | orchestrator | + mac_address = (known after apply) 2026-01-08 00:02:29.105080 | orchestrator | + network_id = (known after apply) 2026-01-08 00:02:29.105084 | orchestrator | + port_security_enabled = (known after apply) 2026-01-08 00:02:29.105088 | orchestrator | + qos_policy_id = (known after apply) 2026-01-08 00:02:29.105092 | orchestrator | + region = (known after apply) 2026-01-08 00:02:29.105095 | orchestrator | + security_group_ids = (known after apply) 2026-01-08 00:02:29.105099 | orchestrator | + tenant_id = (known after apply) 2026-01-08 00:02:29.105103 | orchestrator | 2026-01-08 00:02:29.105107 | orchestrator | + allowed_address_pairs { 2026-01-08 00:02:29.105110 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-08 00:02:29.105114 | orchestrator | } 2026-01-08 00:02:29.105118 | orchestrator | + allowed_address_pairs { 2026-01-08 00:02:29.105122 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-08 00:02:29.105126 | orchestrator | } 2026-01-08 00:02:29.105129 | orchestrator | + allowed_address_pairs { 2026-01-08 00:02:29.105133 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-08 00:02:29.105137 
| orchestrator | } 2026-01-08 00:02:29.105141 | orchestrator | 2026-01-08 00:02:29.105144 | orchestrator | + binding (known after apply) 2026-01-08 00:02:29.105148 | orchestrator | 2026-01-08 00:02:29.105152 | orchestrator | + fixed_ip { 2026-01-08 00:02:29.105156 | orchestrator | + ip_address = "192.168.16.12" 2026-01-08 00:02:29.105159 | orchestrator | + subnet_id = (known after apply) 2026-01-08 00:02:29.105163 | orchestrator | } 2026-01-08 00:02:29.105167 | orchestrator | } 2026-01-08 00:02:29.105172 | orchestrator | 2026-01-08 00:02:29.105176 | orchestrator | # openstack_networking_port_v2.node_port_management[3] will be created 2026-01-08 00:02:29.105180 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-08 00:02:29.105184 | orchestrator | + admin_state_up = (known after apply) 2026-01-08 00:02:29.105188 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-08 00:02:29.105191 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-08 00:02:29.105195 | orchestrator | + all_tags = (known after apply) 2026-01-08 00:02:29.105199 | orchestrator | + device_id = (known after apply) 2026-01-08 00:02:29.105203 | orchestrator | + device_owner = (known after apply) 2026-01-08 00:02:29.105206 | orchestrator | + dns_assignment = (known after apply) 2026-01-08 00:02:29.105210 | orchestrator | + dns_name = (known after apply) 2026-01-08 00:02:29.105214 | orchestrator | + id = (known after apply) 2026-01-08 00:02:29.105218 | orchestrator | + mac_address = (known after apply) 2026-01-08 00:02:29.105221 | orchestrator | + network_id = (known after apply) 2026-01-08 00:02:29.105225 | orchestrator | + port_security_enabled = (known after apply) 2026-01-08 00:02:29.105229 | orchestrator | + qos_policy_id = (known after apply) 2026-01-08 00:02:29.105232 | orchestrator | + region = (known after apply) 2026-01-08 00:02:29.105236 | orchestrator | + security_group_ids = (known after apply) 2026-01-08 00:02:29.105240 | 
orchestrator | + tenant_id = (known after apply) 2026-01-08 00:02:29.105244 | orchestrator | 2026-01-08 00:02:29.105247 | orchestrator | + allowed_address_pairs { 2026-01-08 00:02:29.105251 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-08 00:02:29.105255 | orchestrator | } 2026-01-08 00:02:29.105259 | orchestrator | + allowed_address_pairs { 2026-01-08 00:02:29.105263 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-08 00:02:29.105266 | orchestrator | } 2026-01-08 00:02:29.105270 | orchestrator | + allowed_address_pairs { 2026-01-08 00:02:29.105274 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-08 00:02:29.105278 | orchestrator | } 2026-01-08 00:02:29.105281 | orchestrator | 2026-01-08 00:02:29.105288 | orchestrator | + binding (known after apply) 2026-01-08 00:02:29.105292 | orchestrator | 2026-01-08 00:02:29.105295 | orchestrator | + fixed_ip { 2026-01-08 00:02:29.105299 | orchestrator | + ip_address = "192.168.16.13" 2026-01-08 00:02:29.105303 | orchestrator | + subnet_id = (known after apply) 2026-01-08 00:02:29.105307 | orchestrator | } 2026-01-08 00:02:29.105310 | orchestrator | } 2026-01-08 00:02:29.105316 | orchestrator | 2026-01-08 00:02:29.105320 | orchestrator | # openstack_networking_port_v2.node_port_management[4] will be created 2026-01-08 00:02:29.105323 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-08 00:02:29.105327 | orchestrator | + admin_state_up = (known after apply) 2026-01-08 00:02:29.105331 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-08 00:02:29.105335 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-08 00:02:29.105338 | orchestrator | + all_tags = (known after apply) 2026-01-08 00:02:29.105342 | orchestrator | + device_id = (known after apply) 2026-01-08 00:02:29.105346 | orchestrator | + device_owner = (known after apply) 2026-01-08 00:02:29.105349 | orchestrator | + dns_assignment = (known after apply) 2026-01-08 
00:02:29.105353 | orchestrator | + dns_name = (known after apply) 2026-01-08 00:02:29.105357 | orchestrator | + id = (known after apply) 2026-01-08 00:02:29.105361 | orchestrator | + mac_address = (known after apply) 2026-01-08 00:02:29.105364 | orchestrator | + network_id = (known after apply) 2026-01-08 00:02:29.105368 | orchestrator | + port_security_enabled = (known after apply) 2026-01-08 00:02:29.105372 | orchestrator | + qos_policy_id = (known after apply) 2026-01-08 00:02:29.105376 | orchestrator | + region = (known after apply) 2026-01-08 00:02:29.105379 | orchestrator | + security_group_ids = (known after apply) 2026-01-08 00:02:29.105383 | orchestrator | + tenant_id = (known after apply) 2026-01-08 00:02:29.105387 | orchestrator | 2026-01-08 00:02:29.105391 | orchestrator | + allowed_address_pairs { 2026-01-08 00:02:29.105395 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-08 00:02:29.105399 | orchestrator | } 2026-01-08 00:02:29.105403 | orchestrator | + allowed_address_pairs { 2026-01-08 00:02:29.105406 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-08 00:02:29.105410 | orchestrator | } 2026-01-08 00:02:29.105414 | orchestrator | + allowed_address_pairs { 2026-01-08 00:02:29.105418 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-08 00:02:29.105422 | orchestrator | } 2026-01-08 00:02:29.105425 | orchestrator | 2026-01-08 00:02:29.105429 | orchestrator | + binding (known after apply) 2026-01-08 00:02:29.105433 | orchestrator | 2026-01-08 00:02:29.105437 | orchestrator | + fixed_ip { 2026-01-08 00:02:29.105441 | orchestrator | + ip_address = "192.168.16.14" 2026-01-08 00:02:29.105444 | orchestrator | + subnet_id = (known after apply) 2026-01-08 00:02:29.105448 | orchestrator | } 2026-01-08 00:02:29.105452 | orchestrator | } 2026-01-08 00:02:29.105457 | orchestrator | 2026-01-08 00:02:29.105461 | orchestrator | # openstack_networking_port_v2.node_port_management[5] will be created 2026-01-08 00:02:29.105465 | orchestrator | 
+ resource "openstack_networking_port_v2" "node_port_management" { 2026-01-08 00:02:29.105469 | orchestrator | + admin_state_up = (known after apply) 2026-01-08 00:02:29.105473 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-08 00:02:29.105476 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-08 00:02:29.105480 | orchestrator | + all_tags = (known after apply) 2026-01-08 00:02:29.105484 | orchestrator | + device_id = (known after apply) 2026-01-08 00:02:29.105488 | orchestrator | + device_owner = (known after apply) 2026-01-08 00:02:29.105491 | orchestrator | + dns_assignment = (known after apply) 2026-01-08 00:02:29.105495 | orchestrator | + dns_name = (known after apply) 2026-01-08 00:02:29.105499 | orchestrator | + id = (known after apply) 2026-01-08 00:02:29.105503 | orchestrator | + mac_address = (known after apply) 2026-01-08 00:02:29.105506 | orchestrator | + network_id = (known after apply) 2026-01-08 00:02:29.105510 | orchestrator | + port_security_enabled = (known after apply) 2026-01-08 00:02:29.105514 | orchestrator | + qos_policy_id = (known after apply) 2026-01-08 00:02:29.105520 | orchestrator | + region = (known after apply) 2026-01-08 00:02:29.105524 | orchestrator | + security_group_ids = (known after apply) 2026-01-08 00:02:29.105528 | orchestrator | + tenant_id = (known after apply) 2026-01-08 00:02:29.105532 | orchestrator | 2026-01-08 00:02:29.105535 | orchestrator | + allowed_address_pairs { 2026-01-08 00:02:29.105539 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-08 00:02:29.105543 | orchestrator | } 2026-01-08 00:02:29.105547 | orchestrator | + allowed_address_pairs { 2026-01-08 00:02:29.105550 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-08 00:02:29.105554 | orchestrator | } 2026-01-08 00:02:29.105558 | orchestrator | + allowed_address_pairs { 2026-01-08 00:02:29.105562 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-08 00:02:29.105566 | orchestrator | } 2026-01-08 
00:02:29.105569 | orchestrator | 2026-01-08 00:02:29.105575 | orchestrator | + binding (known after apply) 2026-01-08 00:02:29.105579 | orchestrator | 2026-01-08 00:02:29.105583 | orchestrator | + fixed_ip { 2026-01-08 00:02:29.105587 | orchestrator | + ip_address = "192.168.16.15" 2026-01-08 00:02:29.105591 | orchestrator | + subnet_id = (known after apply) 2026-01-08 00:02:29.105594 | orchestrator | } 2026-01-08 00:02:29.105598 | orchestrator | } 2026-01-08 00:02:29.105603 | orchestrator | 2026-01-08 00:02:29.105607 | orchestrator | # openstack_networking_router_interface_v2.router_interface will be created 2026-01-08 00:02:29.105611 | orchestrator | + resource "openstack_networking_router_interface_v2" "router_interface" { 2026-01-08 00:02:29.105615 | orchestrator | + force_destroy = false 2026-01-08 00:02:29.105619 | orchestrator | + id = (known after apply) 2026-01-08 00:02:29.105622 | orchestrator | + port_id = (known after apply) 2026-01-08 00:02:29.105626 | orchestrator | + region = (known after apply) 2026-01-08 00:02:29.105630 | orchestrator | + router_id = (known after apply) 2026-01-08 00:02:29.105634 | orchestrator | + subnet_id = (known after apply) 2026-01-08 00:02:29.105637 | orchestrator | } 2026-01-08 00:02:29.105641 | orchestrator | 2026-01-08 00:02:29.105645 | orchestrator | # openstack_networking_router_v2.router will be created 2026-01-08 00:02:29.105649 | orchestrator | + resource "openstack_networking_router_v2" "router" { 2026-01-08 00:02:29.105653 | orchestrator | + admin_state_up = (known after apply) 2026-01-08 00:02:29.105656 | orchestrator | + all_tags = (known after apply) 2026-01-08 00:02:29.105660 | orchestrator | + availability_zone_hints = [ 2026-01-08 00:02:29.105664 | orchestrator | + "nova", 2026-01-08 00:02:29.105668 | orchestrator | ] 2026-01-08 00:02:29.105671 | orchestrator | + distributed = (known after apply) 2026-01-08 00:02:29.105675 | orchestrator | + enable_snat = (known after apply) 2026-01-08 00:02:29.105679 | 
orchestrator | + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2026-01-08 00:02:29.105683 | orchestrator | + external_qos_policy_id = (known after apply) 2026-01-08 00:02:29.105687 | orchestrator | + id = (known after apply) 2026-01-08 00:02:29.105690 | orchestrator | + name = "testbed" 2026-01-08 00:02:29.105694 | orchestrator | + region = (known after apply) 2026-01-08 00:02:29.105698 | orchestrator | + tenant_id = (known after apply) 2026-01-08 00:02:29.105702 | orchestrator | 2026-01-08 00:02:29.105705 | orchestrator | + external_fixed_ip (known after apply) 2026-01-08 00:02:29.105709 | orchestrator | } 2026-01-08 00:02:29.105714 | orchestrator | 2026-01-08 00:02:29.105718 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2026-01-08 00:02:29.105723 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2026-01-08 00:02:29.105726 | orchestrator | + description = "ssh" 2026-01-08 00:02:29.105730 | orchestrator | + direction = "ingress" 2026-01-08 00:02:29.105734 | orchestrator | + ethertype = "IPv4" 2026-01-08 00:02:29.105738 | orchestrator | + id = (known after apply) 2026-01-08 00:02:29.105742 | orchestrator | + port_range_max = 22 2026-01-08 00:02:29.105745 | orchestrator | + port_range_min = 22 2026-01-08 00:02:29.105749 | orchestrator | + protocol = "tcp" 2026-01-08 00:02:29.105753 | orchestrator | + region = (known after apply) 2026-01-08 00:02:29.105759 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-08 00:02:29.105763 | orchestrator | + remote_group_id = (known after apply) 2026-01-08 00:02:29.105767 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-01-08 00:02:29.105771 | orchestrator | + security_group_id = (known after apply) 2026-01-08 00:02:29.105775 | orchestrator | + tenant_id = (known after apply) 2026-01-08 00:02:29.105778 | orchestrator | } 2026-01-08 00:02:29.105784 | orchestrator | 2026-01-08 
00:02:29.105787 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2026-01-08 00:02:29.105791 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2026-01-08 00:02:29.105795 | orchestrator | + description = "wireguard" 2026-01-08 00:02:29.105799 | orchestrator | + direction = "ingress" 2026-01-08 00:02:29.105802 | orchestrator | + ethertype = "IPv4" 2026-01-08 00:02:29.105806 | orchestrator | + id = (known after apply) 2026-01-08 00:02:29.105810 | orchestrator | + port_range_max = 51820 2026-01-08 00:02:29.105814 | orchestrator | + port_range_min = 51820 2026-01-08 00:02:29.105817 | orchestrator | + protocol = "udp" 2026-01-08 00:02:29.105821 | orchestrator | + region = (known after apply) 2026-01-08 00:02:29.105825 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-08 00:02:29.105829 | orchestrator | + remote_group_id = (known after apply) 2026-01-08 00:02:29.105833 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-01-08 00:02:29.105836 | orchestrator | + security_group_id = (known after apply) 2026-01-08 00:02:29.105840 | orchestrator | + tenant_id = (known after apply) 2026-01-08 00:02:29.105844 | orchestrator | } 2026-01-08 00:02:29.105849 | orchestrator | 2026-01-08 00:02:29.105853 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2026-01-08 00:02:29.105857 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2026-01-08 00:02:29.105861 | orchestrator | + direction = "ingress" 2026-01-08 00:02:29.105864 | orchestrator | + ethertype = "IPv4" 2026-01-08 00:02:29.105868 | orchestrator | + id = (known after apply) 2026-01-08 00:02:29.105872 | orchestrator | + protocol = "tcp" 2026-01-08 00:02:29.105876 | orchestrator | + region = (known after apply) 2026-01-08 00:02:29.105879 | orchestrator | + remote_address_group_id = (known 
after apply) 2026-01-08 00:02:29.105883 | orchestrator | + remote_group_id = (known after apply) 2026-01-08 00:02:29.105887 | orchestrator | + remote_ip_prefix = "192.168.16.0/20" 2026-01-08 00:02:29.105891 | orchestrator | + security_group_id = (known after apply) 2026-01-08 00:02:29.105894 | orchestrator | + tenant_id = (known after apply) 2026-01-08 00:02:29.105898 | orchestrator | } 2026-01-08 00:02:29.105903 | orchestrator | 2026-01-08 00:02:29.105907 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2026-01-08 00:02:29.105911 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2026-01-08 00:02:29.105915 | orchestrator | + direction = "ingress" 2026-01-08 00:02:29.105919 | orchestrator | + ethertype = "IPv4" 2026-01-08 00:02:29.105923 | orchestrator | + id = (known after apply) 2026-01-08 00:02:29.105926 | orchestrator | + protocol = "udp" 2026-01-08 00:02:29.105930 | orchestrator | + region = (known after apply) 2026-01-08 00:02:29.105934 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-08 00:02:29.105938 | orchestrator | + remote_group_id = (known after apply) 2026-01-08 00:02:29.105941 | orchestrator | + remote_ip_prefix = "192.168.16.0/20" 2026-01-08 00:02:29.105945 | orchestrator | + security_group_id = (known after apply) 2026-01-08 00:02:29.105949 | orchestrator | + tenant_id = (known after apply) 2026-01-08 00:02:29.105953 | orchestrator | } 2026-01-08 00:02:29.105958 | orchestrator | 2026-01-08 00:02:29.105962 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2026-01-08 00:02:29.105968 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2026-01-08 00:02:29.105972 | orchestrator | + direction = "ingress" 2026-01-08 00:02:29.105976 | orchestrator | + ethertype = "IPv4" 2026-01-08 00:02:29.105979 | orchestrator | + id = 
(known after apply) 2026-01-08 00:02:29.105991 | orchestrator | + protocol = "icmp" 2026-01-08 00:02:29.105995 | orchestrator | + region = (known after apply) 2026-01-08 00:02:29.105999 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-08 00:02:29.106003 | orchestrator | + remote_group_id = (known after apply) 2026-01-08 00:02:29.106007 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-01-08 00:02:29.106011 | orchestrator | + security_group_id = (known after apply) 2026-01-08 00:02:29.106194 | orchestrator | + tenant_id = (known after apply) 2026-01-08 00:02:29.106271 | orchestrator | } 2026-01-08 00:02:29.106347 | orchestrator | 2026-01-08 00:02:29.106421 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2026-01-08 00:02:29.106472 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2026-01-08 00:02:29.106520 | orchestrator | + direction = "ingress" 2026-01-08 00:02:29.106577 | orchestrator | + ethertype = "IPv4" 2026-01-08 00:02:29.106604 | orchestrator | + id = (known after apply) 2026-01-08 00:02:29.106617 | orchestrator | + protocol = "tcp" 2026-01-08 00:02:29.106621 | orchestrator | + region = (known after apply) 2026-01-08 00:02:29.106673 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-08 00:02:29.106750 | orchestrator | + remote_group_id = (known after apply) 2026-01-08 00:02:29.106850 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-01-08 00:02:29.106879 | orchestrator | + security_group_id = (known after apply) 2026-01-08 00:02:29.106927 | orchestrator | + tenant_id = (known after apply) 2026-01-08 00:02:29.106970 | orchestrator | } 2026-01-08 00:02:29.107038 | orchestrator | 2026-01-08 00:02:29.107178 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2026-01-08 00:02:29.107184 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" 
"security_group_node_rule2" { 2026-01-08 00:02:29.107206 | orchestrator | + direction = "ingress" 2026-01-08 00:02:29.107277 | orchestrator | + ethertype = "IPv4" 2026-01-08 00:02:29.107328 | orchestrator | + id = (known after apply) 2026-01-08 00:02:29.107396 | orchestrator | + protocol = "udp" 2026-01-08 00:02:29.107473 | orchestrator | + region = (known after apply) 2026-01-08 00:02:29.107576 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-08 00:02:29.107641 | orchestrator | + remote_group_id = (known after apply) 2026-01-08 00:02:29.107764 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-01-08 00:02:29.107821 | orchestrator | + security_group_id = (known after apply) 2026-01-08 00:02:29.107937 | orchestrator | + tenant_id = (known after apply) 2026-01-08 00:02:29.108046 | orchestrator | } 2026-01-08 00:02:29.108252 | orchestrator | 2026-01-08 00:02:29.108265 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2026-01-08 00:02:29.108269 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2026-01-08 00:02:29.108273 | orchestrator | + direction = "ingress" 2026-01-08 00:02:29.108280 | orchestrator | + ethertype = "IPv4" 2026-01-08 00:02:29.108284 | orchestrator | + id = (known after apply) 2026-01-08 00:02:29.108302 | orchestrator | + protocol = "icmp" 2026-01-08 00:02:29.108317 | orchestrator | + region = (known after apply) 2026-01-08 00:02:29.108330 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-08 00:02:29.108334 | orchestrator | + remote_group_id = (known after apply) 2026-01-08 00:02:29.108338 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-01-08 00:02:29.108341 | orchestrator | + security_group_id = (known after apply) 2026-01-08 00:02:29.108345 | orchestrator | + tenant_id = (known after apply) 2026-01-08 00:02:29.108357 | orchestrator | } 2026-01-08 00:02:29.108406 | orchestrator | 2026-01-08 
00:02:29.108411 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2026-01-08 00:02:29.108431 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2026-01-08 00:02:29.108443 | orchestrator | + description = "vrrp" 2026-01-08 00:02:29.108447 | orchestrator | + direction = "ingress" 2026-01-08 00:02:29.108451 | orchestrator | + ethertype = "IPv4" 2026-01-08 00:02:29.108515 | orchestrator | + id = (known after apply) 2026-01-08 00:02:29.108520 | orchestrator | + protocol = "112" 2026-01-08 00:02:29.108556 | orchestrator | + region = (known after apply) 2026-01-08 00:02:29.108561 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-08 00:02:29.108595 | orchestrator | + remote_group_id = (known after apply) 2026-01-08 00:02:29.108607 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-01-08 00:02:29.108618 | orchestrator | + security_group_id = (known after apply) 2026-01-08 00:02:29.108623 | orchestrator | + tenant_id = (known after apply) 2026-01-08 00:02:29.108722 | orchestrator | } 2026-01-08 00:02:29.108750 | orchestrator | 2026-01-08 00:02:29.108755 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created 2026-01-08 00:02:29.108790 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" { 2026-01-08 00:02:29.108795 | orchestrator | + all_tags = (known after apply) 2026-01-08 00:02:29.108827 | orchestrator | + description = "management security group" 2026-01-08 00:02:29.108832 | orchestrator | + id = (known after apply) 2026-01-08 00:02:29.108917 | orchestrator | + name = "testbed-management" 2026-01-08 00:02:29.108956 | orchestrator | + region = (known after apply) 2026-01-08 00:02:29.108969 | orchestrator | + stateful = (known after apply) 2026-01-08 00:02:29.109065 | orchestrator | + tenant_id = (known after apply) 2026-01-08 00:02:29.109071 | orchestrator | } 2026-01-08 
00:02:29.109129 | orchestrator | 2026-01-08 00:02:29.109133 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created 2026-01-08 00:02:29.109172 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" { 2026-01-08 00:02:29.109177 | orchestrator | + all_tags = (known after apply) 2026-01-08 00:02:29.109215 | orchestrator | + description = "node security group" 2026-01-08 00:02:29.109219 | orchestrator | + id = (known after apply) 2026-01-08 00:02:29.109258 | orchestrator | + name = "testbed-node" 2026-01-08 00:02:29.109262 | orchestrator | + region = (known after apply) 2026-01-08 00:02:29.109266 | orchestrator | + stateful = (known after apply) 2026-01-08 00:02:29.109269 | orchestrator | + tenant_id = (known after apply) 2026-01-08 00:02:29.109273 | orchestrator | } 2026-01-08 00:02:29.109294 | orchestrator | 2026-01-08 00:02:29.109300 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created 2026-01-08 00:02:29.109370 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" { 2026-01-08 00:02:29.109384 | orchestrator | + all_tags = (known after apply) 2026-01-08 00:02:29.109395 | orchestrator | + cidr = "192.168.16.0/20" 2026-01-08 00:02:29.109399 | orchestrator | + dns_nameservers = [ 2026-01-08 00:02:29.109428 | orchestrator | + "8.8.8.8", 2026-01-08 00:02:29.109467 | orchestrator | + "9.9.9.9", 2026-01-08 00:02:29.109472 | orchestrator | ] 2026-01-08 00:02:29.109627 | orchestrator | + enable_dhcp = true 2026-01-08 00:02:29.109632 | orchestrator | + gateway_ip = (known after apply) 2026-01-08 00:02:29.109668 | orchestrator | + id = (known after apply) 2026-01-08 00:02:29.109672 | orchestrator | + ip_version = 4 2026-01-08 00:02:29.109723 | orchestrator | + ipv6_address_mode = (known after apply) 2026-01-08 00:02:29.109737 | orchestrator | + ipv6_ra_mode = (known after apply) 2026-01-08 00:02:29.109749 | orchestrator | + name = "subnet-testbed-management" 
2026-01-08 00:02:29.109753 | orchestrator | + network_id = (known after apply) 2026-01-08 00:02:29.109839 | orchestrator | + no_gateway = false 2026-01-08 00:02:29.109845 | orchestrator | + region = (known after apply) 2026-01-08 00:02:29.109874 | orchestrator | + service_types = (known after apply) 2026-01-08 00:02:29.109914 | orchestrator | + tenant_id = (known after apply) 2026-01-08 00:02:29.109954 | orchestrator | 2026-01-08 00:02:29.110055 | orchestrator | + allocation_pool { 2026-01-08 00:02:29.110060 | orchestrator | + end = "192.168.31.250" 2026-01-08 00:02:29.110105 | orchestrator | + start = "192.168.31.200" 2026-01-08 00:02:29.110110 | orchestrator | } 2026-01-08 00:02:29.110179 | orchestrator | } 2026-01-08 00:02:29.110185 | orchestrator | 2026-01-08 00:02:29.110239 | orchestrator | # terraform_data.image will be created 2026-01-08 00:02:29.110285 | orchestrator | + resource "terraform_data" "image" { 2026-01-08 00:02:29.110299 | orchestrator | + id = (known after apply) 2026-01-08 00:02:29.110339 | orchestrator | + input = "Ubuntu 24.04" 2026-01-08 00:02:29.110394 | orchestrator | + output = (known after apply) 2026-01-08 00:02:29.110398 | orchestrator | } 2026-01-08 00:02:29.110463 | orchestrator | 2026-01-08 00:02:29.110499 | orchestrator | # terraform_data.image_node will be created 2026-01-08 00:02:29.110540 | orchestrator | + resource "terraform_data" "image_node" { 2026-01-08 00:02:29.110545 | orchestrator | + id = (known after apply) 2026-01-08 00:02:29.110601 | orchestrator | + input = "Ubuntu 24.04" 2026-01-08 00:02:29.110606 | orchestrator | + output = (known after apply) 2026-01-08 00:02:29.110609 | orchestrator | } 2026-01-08 00:02:29.110613 | orchestrator | 2026-01-08 00:02:29.110663 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
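For reference, the planned security-group and subnet resources above map onto Terraform configuration of roughly the following shape. This is a sketch reconstructed from the plan output: the attribute values (protocol `"112"` for VRRP, the `192.168.16.0/20` CIDR, the DHCP allocation pool) appear verbatim in the plan, while the cross-resource references are assumptions, since the plan only shows `(known after apply)` for them.

```hcl
# Sketch reconstructed from the plan output; reference wiring is assumed.

resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

# VRRP (keepalived) uses IP protocol number 112, given as a numeric string.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description      = "vrrp"
  direction        = "ingress"
  ethertype        = "IPv4"
  protocol         = "112"
  remote_ip_prefix = "0.0.0.0/0"
  # Assumption: which group this rule attaches to is not visible in the plan.
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id # assumed reference
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```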
2026-01-08 00:02:29.110703 | orchestrator | 2026-01-08 00:02:29.110748 | orchestrator | Changes to Outputs: 2026-01-08 00:02:29.110752 | orchestrator | + manager_address = (sensitive value) 2026-01-08 00:02:29.110794 | orchestrator | + private_key = (sensitive value) 2026-01-08 00:02:29.352169 | orchestrator | terraform_data.image: Creating... 2026-01-08 00:02:29.352422 | orchestrator | terraform_data.image: Creation complete after 0s [id=e6dc0c1e-6d33-0f10-0385-e57410d73dbf] 2026-01-08 00:02:29.352542 | orchestrator | terraform_data.image_node: Creating... 2026-01-08 00:02:29.353746 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=668e449d-bfc8-fe09-64c0-efef471839e4] 2026-01-08 00:02:29.366727 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-01-08 00:02:29.367226 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-01-08 00:02:29.379640 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-01-08 00:02:29.379692 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-01-08 00:02:29.382272 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-01-08 00:02:29.383363 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-01-08 00:02:29.383684 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-01-08 00:02:29.385042 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-01-08 00:02:29.392701 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-01-08 00:02:29.394574 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 
2026-01-08 00:02:29.845091 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-01-08 00:02:29.845159 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-01-08 00:02:29.849876 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-01-08 00:02:29.849924 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-01-08 00:02:30.026235 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2026-01-08 00:02:30.032220 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-01-08 00:02:30.415596 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=19e99125-97b3-424c-ac18-278ec9aa22fd] 2026-01-08 00:02:30.426666 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-01-08 00:02:33.023971 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=0e18c249-c135-4dbc-997b-e877fb7ddadb] 2026-01-08 00:02:33.035082 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-01-08 00:02:33.043069 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=7a76cbc0-0137-4ff9-923d-b4b1dcc050dd] 2026-01-08 00:02:33.050061 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=ca658e2c-6884-4cb0-a984-2a30c4218b42] 2026-01-08 00:02:33.054372 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-01-08 00:02:33.061813 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=c2b7f2d1-b409-4164-9c96-a325340a2181] 2026-01-08 00:02:33.064358 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 
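The `terraform_data.image` resources (input `"Ubuntu 24.04"`) followed immediately by the image data-source reads suggest the common pattern of passing an image name through `terraform_data` so that it can serve as a replacement trigger. A minimal sketch under that assumption; the actual wiring is not visible in the log:

```hcl
# Carries the image name; changing it forces dependents to be replaced.
resource "terraform_data" "image" {
  input = "Ubuntu 24.04"
}

# Resolves the image name to an ID (846820b2-... in this run).
data "openstack_images_image_v2" "image" {
  name        = terraform_data.image.output
  most_recent = true # assumption: pick the newest matching image
}
```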
2026-01-08 00:02:33.068731 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-01-08 00:02:33.080912 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=6a42190a-2484-4e8d-b5ec-29d2f01455ea] 2026-01-08 00:02:33.089353 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-01-08 00:02:33.105956 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=c2709d8e-63c5-44e7-8dd5-568d2763b490] 2026-01-08 00:02:33.112484 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-01-08 00:02:33.139391 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=083917d3-ae8a-40d6-964f-5c24c6020ef0] 2026-01-08 00:02:33.145512 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=124f655d-2588-4acd-9ece-c76299342e82] 2026-01-08 00:02:33.165859 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-01-08 00:02:33.170142 | orchestrator | local_file.id_rsa_pub: Creating... 2026-01-08 00:02:33.172008 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=831c7f02cc7af63107e8ace33c6844ffa506ccac] 2026-01-08 00:02:33.174432 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=a50c0dfb03489605cd548efb87554005c9c8e29f] 2026-01-08 00:02:33.178207 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 
2026-01-08 00:02:33.246502 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=d8742191-d797-4305-9777-b3b4a7e3f85b] 2026-01-08 00:02:33.805860 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=33ab2247-e822-4059-8ff1-fecc56de3eb1] 2026-01-08 00:02:34.112543 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=5c294916-51b2-4dc4-99c0-9248b3b9c962] 2026-01-08 00:02:34.122803 | orchestrator | openstack_networking_router_v2.router: Creating... 2026-01-08 00:02:36.456572 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=59a3115e-34af-4755-b21c-16d9ce2dd0b0] 2026-01-08 00:02:36.460939 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=2593653b-a2f8-487c-a2d2-926b6edd94aa] 2026-01-08 00:02:36.503113 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=59a47b62-1fe1-4b4a-8031-07381f636ec2] 2026-01-08 00:02:36.544286 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=f0f9a47d-1724-4561-93ab-95a64de5d531] 2026-01-08 00:02:36.557912 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=0ea0a032-e520-451d-8180-d4c0b00694e2] 2026-01-08 00:02:36.577757 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=b666bc2c-7776-4612-aae4-ea993c4606ba] 2026-01-08 00:02:37.619632 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=ed761acb-09ad-4249-8792-b13fb3c6b204] 2026-01-08 00:02:37.626650 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-01-08 00:02:37.627801 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 
2026-01-08 00:02:37.630144 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-01-08 00:02:37.823151 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=f6b34fec-e81b-4aa1-8298-0708097f406c] 2026-01-08 00:02:37.826394 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=9d133a22-5715-4498-b895-ddfb841f7e18] 2026-01-08 00:02:37.834963 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-01-08 00:02:37.835091 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-01-08 00:02:37.835104 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-01-08 00:02:37.838921 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-01-08 00:02:37.838978 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-01-08 00:02:37.841545 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-01-08 00:02:37.841693 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-01-08 00:02:37.843545 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-01-08 00:02:37.850726 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 
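The router and its subnet attachment created above correspond to a pair of resources roughly like the following. The external gateway network is not visible in the log, so it is shown as a hypothetical variable:

```hcl
# Sketch only: the external network for the gateway is an assumption.
resource "openstack_networking_router_v2" "router" {
  external_network_id = var.external_network_id # assumed, not shown in the log
}

resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}
```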
2026-01-08 00:02:37.997861 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=91558102-eb9a-471c-821a-b8f106286df3] 2026-01-08 00:02:37.999689 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=1ea0dc34-9b7f-45ca-9af2-5c459f386c50] 2026-01-08 00:02:38.010165 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-01-08 00:02:38.010848 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-01-08 00:02:38.148111 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=200c16d8-745f-4ea8-8d88-0b52cf850c27] 2026-01-08 00:02:38.162957 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-01-08 00:02:38.309850 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=0be90283-c4f9-40a9-a43a-4d4f248d030e] 2026-01-08 00:02:38.321643 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-01-08 00:02:38.644065 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=e5483431-1898-4ba1-ab58-805a479cd8e4] 2026-01-08 00:02:38.655078 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-01-08 00:02:38.852568 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=bad5d18b-17d3-48a8-b762-f8210a77ea89] 2026-01-08 00:02:38.867933 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-01-08 00:02:38.870141 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=c13ce862-fd87-4731-99ba-3c8fc56fc636] 2026-01-08 00:02:38.874482 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 
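The six `node_port_management[0..5]` ports created above imply a counted port resource on the management network; the security-group attachment is an assumption, since the plan values are not shown for the ports in this excerpt:

```hcl
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id # assumed reference
  # Assumption: node ports carry the testbed-node security group.
  security_group_ids = [openstack_networking_secgroup_v2.security_group_node.id]
}
```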
2026-01-08 00:02:38.906200 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=70a93737-a76f-48b4-868b-9e523ff11fa6] 2026-01-08 00:02:38.916244 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=1fc9f37b-eb43-4e58-a6b7-45e393944b10] 2026-01-08 00:02:38.924663 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=5fe1a5a5-8556-4636-a8dc-1cd976cfeaa1] 2026-01-08 00:02:39.146637 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=192cb316-09e9-44dd-b437-c2c18ba086b1] 2026-01-08 00:02:39.368073 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 0s [id=f4963569-02aa-49af-a0a8-51ca33ab1958] 2026-01-08 00:02:39.373922 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 0s [id=9fa04825-e9d9-41b9-9149-18c532c8d419] 2026-01-08 00:02:39.382625 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=aad43acc-7643-4a78-865e-991c86f36e41] 2026-01-08 00:02:39.569648 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 2s [id=5322d3a9-9c30-401d-801c-b7e9147072ac] 2026-01-08 00:02:39.862096 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=d66d5b1e-c535-4ea6-9df3-2464e8fabc86] 2026-01-08 00:02:41.141560 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=9032c440-43db-4ee2-bb99-e420ea480011] 2026-01-08 00:02:41.162567 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-01-08 00:02:41.175826 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 
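The manager floating IP and its association, created a few entries below, follow the standard pattern of allocating from a pool and binding to the manager's management port. The pool name is not visible in the log, so it is shown as a hypothetical variable:

```hcl
resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = var.public_network # assumption: name of the external/floating pool
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}
```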
2026-01-08 00:02:41.176536 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-01-08 00:02:41.178824 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-01-08 00:02:41.179537 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-01-08 00:02:41.184501 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-01-08 00:02:41.204239 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-01-08 00:02:43.626347 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=03deaf70-696b-490f-83de-0c883cba5d89] 2026-01-08 00:02:43.635830 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-01-08 00:02:43.637306 | orchestrator | local_file.inventory: Creating... 2026-01-08 00:02:43.642690 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-01-08 00:02:43.643180 | orchestrator | local_file.inventory: Creation complete after 0s [id=1e9e789ff88c1b98e3aaa5002d8ef26e93fd9c73] 2026-01-08 00:02:43.648461 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=79fcda97a57264b7b956a271f4e1c465cdd2c67b] 2026-01-08 00:02:45.036949 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=03deaf70-696b-490f-83de-0c883cba5d89] 2026-01-08 00:02:51.179416 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-01-08 00:02:51.180520 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-01-08 00:02:51.180568 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-01-08 00:02:51.180588 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... 
[10s elapsed] 2026-01-08 00:02:51.193896 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-01-08 00:02:51.206135 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-01-08 00:03:01.181776 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-01-08 00:03:01.181901 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-01-08 00:03:01.181929 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-01-08 00:03:01.181940 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-01-08 00:03:01.195167 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-01-08 00:03:01.206384 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-01-08 00:03:11.190410 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2026-01-08 00:03:11.190567 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-01-08 00:03:11.190597 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2026-01-08 00:03:11.190634 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-01-08 00:03:11.195790 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-01-08 00:03:11.207319 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... 
[30s elapsed] 2026-01-08 00:03:12.111728 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=ff05416e-d561-4501-90a4-024f9f841a40] 2026-01-08 00:03:12.125575 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=a4237501-b7e4-430b-abd0-3a2bf741f4c9] 2026-01-08 00:03:21.199800 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed] 2026-01-08 00:03:21.199909 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2026-01-08 00:03:21.199923 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed] 2026-01-08 00:03:21.208378 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2026-01-08 00:03:21.921202 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=209075fd-fd53-44a0-911c-42bee6f10501] 2026-01-08 00:03:22.383711 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 41s [id=849377fd-bfdf-4e83-be3d-6a1b951c4a05] 2026-01-08 00:03:22.645785 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 42s [id=c2ba5268-5b2d-40bf-87a2-e8a17d671431] 2026-01-08 00:03:31.209614 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed] 2026-01-08 00:03:41.218822 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [1m0s elapsed] 2026-01-08 00:03:42.786585 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 1m2s [id=5dc1ce34-e49a-4e81-9f67-0c963af4b063] 2026-01-08 00:03:42.815657 | orchestrator | null_resource.node_semaphore: Creating... 2026-01-08 00:03:42.816385 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 
2026-01-08 00:03:42.817371 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-01-08 00:03:42.817535 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-01-08 00:03:42.831128 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-01-08 00:03:42.835791 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-01-08 00:03:42.839249 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=2660809271766923385] 2026-01-08 00:03:42.840529 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-01-08 00:03:42.856294 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-01-08 00:03:42.858217 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-01-08 00:03:42.861192 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-01-08 00:03:42.874610 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 
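The nine volume attachments above pair each `node_volume[i]` with one of three node servers; the attachment IDs in this run (instance/volume pairs) are consistent with three volumes per node on `node_server[3..5]`, but the exact index mapping is an assumption, not something the log states:

```hcl
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count = 9
  # Assumption: mapping inferred from the id pairs in this run
  # (attachments 0,3,6 -> node_server[3]; 1,4,7 -> [4]; 2,5,8 -> [5]).
  instance_id = openstack_compute_instance_v2.node_server[count.index % 3 + 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```

The `null_resource.node_semaphore` created just before the attachments likely serves as a `depends_on` barrier so that attachments only start once all node servers exist; that ordering is visible in the timestamps, though the dependency declaration itself is not shown.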
2026-01-08 00:03:46.211170 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=ff05416e-d561-4501-90a4-024f9f841a40/7a76cbc0-0137-4ff9-923d-b4b1dcc050dd] 2026-01-08 00:03:46.215086 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=c2ba5268-5b2d-40bf-87a2-e8a17d671431/0e18c249-c135-4dbc-997b-e877fb7ddadb] 2026-01-08 00:03:46.388094 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=c2ba5268-5b2d-40bf-87a2-e8a17d671431/6a42190a-2484-4e8d-b5ec-29d2f01455ea] 2026-01-08 00:03:46.416662 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=a4237501-b7e4-430b-abd0-3a2bf741f4c9/ca658e2c-6884-4cb0-a984-2a30c4218b42] 2026-01-08 00:03:46.481235 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=a4237501-b7e4-430b-abd0-3a2bf741f4c9/083917d3-ae8a-40d6-964f-5c24c6020ef0] 2026-01-08 00:03:47.761210 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=ff05416e-d561-4501-90a4-024f9f841a40/c2b7f2d1-b409-4164-9c96-a325340a2181] 2026-01-08 00:03:47.864698 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=ff05416e-d561-4501-90a4-024f9f841a40/d8742191-d797-4305-9777-b3b4a7e3f85b] 2026-01-08 00:03:52.495372 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 9s [id=c2ba5268-5b2d-40bf-87a2-e8a17d671431/124f655d-2588-4acd-9ece-c76299342e82] 2026-01-08 00:03:52.767346 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=a4237501-b7e4-430b-abd0-3a2bf741f4c9/c2709d8e-63c5-44e7-8dd5-568d2763b490] 2026-01-08 00:03:52.876092 | orchestrator | openstack_compute_instance_v2.manager_server: 
Still creating... [10s elapsed] 2026-01-08 00:04:02.876515 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-01-08 00:04:03.469490 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=216f45b8-8122-4244-9cf4-540e7b9e67f7] 2026-01-08 00:04:03.497573 | orchestrator | 2026-01-08 00:04:03.497660 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-01-08 00:04:03.497721 | orchestrator | 2026-01-08 00:04:03.497732 | orchestrator | Outputs: 2026-01-08 00:04:03.497739 | orchestrator | 2026-01-08 00:04:03.497769 | orchestrator | manager_address = 2026-01-08 00:04:03.497777 | orchestrator | private_key = 2026-01-08 00:04:03.607388 | orchestrator | ok: Runtime: 0:01:41.717513 2026-01-08 00:04:03.632126 | 2026-01-08 00:04:03.632263 | TASK [Create infrastructure (stable)] 2026-01-08 00:04:04.177746 | orchestrator | skipping: Conditional result was False 2026-01-08 00:04:04.198259 | 2026-01-08 00:04:04.198452 | TASK [Fetch manager address] 2026-01-08 00:04:04.709837 | orchestrator | ok 2026-01-08 00:04:04.725743 | 2026-01-08 00:04:04.725955 | TASK [Set manager_host address] 2026-01-08 00:04:04.808044 | orchestrator | ok 2026-01-08 00:04:04.820017 | 2026-01-08 00:04:04.820169 | LOOP [Update ansible collections] 2026-01-08 00:04:05.900440 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-01-08 00:04:05.901816 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-08 00:04:05.901906 | orchestrator | Starting galaxy collection install process 2026-01-08 00:04:05.901944 | orchestrator | Process install dependency map 2026-01-08 00:04:05.901977 | orchestrator | Starting collection install process 2026-01-08 00:04:05.902007 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2026-01-08 
00:04:05.902040 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2026-01-08 00:04:05.902081 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-01-08 00:04:05.902152 | orchestrator | ok: Item: commons Runtime: 0:00:00.701061 2026-01-08 00:04:07.053951 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-08 00:04:07.054125 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-01-08 00:04:07.054178 | orchestrator | Starting galaxy collection install process 2026-01-08 00:04:07.054219 | orchestrator | Process install dependency map 2026-01-08 00:04:07.054257 | orchestrator | Starting collection install process 2026-01-08 00:04:07.054293 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2026-01-08 00:04:07.054347 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2026-01-08 00:04:07.054382 | orchestrator | osism.services:999.0.0 was installed successfully 2026-01-08 00:04:07.054439 | orchestrator | ok: Item: services Runtime: 0:00:00.844964 2026-01-08 00:04:07.072525 | 2026-01-08 00:04:07.072707 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-08 00:04:17.657825 | orchestrator | ok 2026-01-08 00:04:17.670975 | 2026-01-08 00:04:17.671124 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-08 00:05:17.719412 | orchestrator | ok 2026-01-08 00:05:17.730011 | 2026-01-08 00:05:17.730166 | TASK [Fetch manager ssh hostkey] 2026-01-08 00:05:19.308600 | orchestrator | Output suppressed because no_log was given 2026-01-08 00:05:19.327153 | 2026-01-08 00:05:19.327350 | TASK [Get ssh keypair from terraform environment] 2026-01-08 
00:05:19.866236 | orchestrator | ok: Runtime: 0:00:00.009335 2026-01-08 00:05:19.885824 | 2026-01-08 00:05:19.886003 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-08 00:05:19.936065 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-01-08 00:05:19.946062 | 2026-01-08 00:05:19.946216 | TASK [Run manager part 0] 2026-01-08 00:05:21.045642 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-08 00:05:21.112595 | orchestrator | 2026-01-08 00:05:21.112670 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-01-08 00:05:21.112679 | orchestrator | 2026-01-08 00:05:21.112692 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-01-08 00:05:23.069901 | orchestrator | ok: [testbed-manager] 2026-01-08 00:05:23.069997 | orchestrator | 2026-01-08 00:05:23.070067 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-08 00:05:23.070080 | orchestrator | 2026-01-08 00:05:23.070089 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-08 00:05:25.172121 | orchestrator | ok: [testbed-manager] 2026-01-08 00:05:25.172231 | orchestrator | 2026-01-08 00:05:25.172245 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-08 00:05:25.902907 | orchestrator | ok: [testbed-manager] 2026-01-08 00:05:25.903029 | orchestrator | 2026-01-08 00:05:25.903039 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-01-08 00:05:25.954959 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:05:25.955019 | orchestrator | 2026-01-08 00:05:25.955032 | orchestrator | TASK 
[Update package cache] **************************************************** 2026-01-08 00:05:25.989185 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:05:25.989245 | orchestrator | 2026-01-08 00:05:25.989252 | orchestrator | TASK [Install required packages] *********************************************** 2026-01-08 00:05:26.027983 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:05:26.028040 | orchestrator | 2026-01-08 00:05:26.028136 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-01-08 00:05:26.063471 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:05:26.063537 | orchestrator | 2026-01-08 00:05:26.063547 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-08 00:05:26.097568 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:05:26.097620 | orchestrator | 2026-01-08 00:05:26.097627 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-01-08 00:05:26.134732 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:05:26.134781 | orchestrator | 2026-01-08 00:05:26.134789 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-01-08 00:05:26.165079 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:05:26.165156 | orchestrator | 2026-01-08 00:05:26.165167 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-01-08 00:05:26.925892 | orchestrator | changed: [testbed-manager] 2026-01-08 00:05:26.925932 | orchestrator | 2026-01-08 00:05:26.925938 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-01-08 00:08:24.354483 | orchestrator | changed: [testbed-manager] 2026-01-08 00:08:24.354541 | orchestrator | 2026-01-08 00:08:24.354554 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2026-01-08 00:10:11.360134 | orchestrator | changed: [testbed-manager] 2026-01-08 00:10:11.360246 | orchestrator | 2026-01-08 00:10:11.360263 | orchestrator | TASK [Install required packages] *********************************************** 2026-01-08 00:10:36.062524 | orchestrator | changed: [testbed-manager] 2026-01-08 00:10:36.062588 | orchestrator | 2026-01-08 00:10:36.062599 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-01-08 00:10:45.357822 | orchestrator | changed: [testbed-manager] 2026-01-08 00:10:45.357948 | orchestrator | 2026-01-08 00:10:45.357968 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-08 00:10:45.399675 | orchestrator | ok: [testbed-manager] 2026-01-08 00:10:45.399756 | orchestrator | 2026-01-08 00:10:45.399771 | orchestrator | TASK [Get current user] ******************************************************** 2026-01-08 00:10:46.229140 | orchestrator | ok: [testbed-manager] 2026-01-08 00:10:46.229177 | orchestrator | 2026-01-08 00:10:46.229188 | orchestrator | TASK [Create venv directory] *************************************************** 2026-01-08 00:10:46.955557 | orchestrator | changed: [testbed-manager] 2026-01-08 00:10:46.955600 | orchestrator | 2026-01-08 00:10:46.955609 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-01-08 00:10:54.472340 | orchestrator | changed: [testbed-manager] 2026-01-08 00:10:54.472385 | orchestrator | 2026-01-08 00:10:54.472409 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-01-08 00:11:00.691329 | orchestrator | changed: [testbed-manager] 2026-01-08 00:11:00.691422 | orchestrator | 2026-01-08 00:11:00.691444 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-01-08 00:11:03.486948 | orchestrator | changed: 
[testbed-manager] 2026-01-08 00:11:03.487046 | orchestrator | 2026-01-08 00:11:03.487062 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-01-08 00:11:05.299155 | orchestrator | changed: [testbed-manager] 2026-01-08 00:11:05.299267 | orchestrator | 2026-01-08 00:11:05.299284 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-01-08 00:11:06.525928 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-08 00:11:06.526109 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-08 00:11:06.526131 | orchestrator | 2026-01-08 00:11:06.526144 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-01-08 00:11:06.573774 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-08 00:11:06.573884 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-08 00:11:06.573901 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-08 00:11:06.573914 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-01-08 00:11:15.213701 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-08 00:11:15.213838 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-08 00:11:15.213856 | orchestrator | 2026-01-08 00:11:15.213869 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-01-08 00:11:15.830889 | orchestrator | changed: [testbed-manager] 2026-01-08 00:11:15.830984 | orchestrator | 2026-01-08 00:11:15.831006 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-01-08 00:11:35.810614 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-01-08 00:11:35.810722 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-01-08 00:11:35.810744 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-01-08 00:11:35.810758 | orchestrator | 2026-01-08 00:11:35.810773 | orchestrator | TASK [Install local collections] *********************************************** 2026-01-08 00:11:38.214653 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-01-08 00:11:38.214718 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-01-08 00:11:38.214732 | orchestrator | 2026-01-08 00:11:38.214745 | orchestrator | PLAY [Create operator user] **************************************************** 2026-01-08 00:11:38.214757 | orchestrator | 2026-01-08 00:11:38.214769 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-08 00:11:39.656643 | orchestrator | ok: [testbed-manager] 2026-01-08 00:11:39.656843 | orchestrator | 2026-01-08 00:11:39.656865 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-08 00:11:39.703427 | orchestrator | ok: [testbed-manager] 2026-01-08 00:11:39.703504 | 
orchestrator | 2026-01-08 00:11:39.703519 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-08 00:11:39.775019 | orchestrator | ok: [testbed-manager] 2026-01-08 00:11:39.775093 | orchestrator | 2026-01-08 00:11:39.775106 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-01-08 00:11:40.556807 | orchestrator | changed: [testbed-manager] 2026-01-08 00:11:40.556900 | orchestrator | 2026-01-08 00:11:40.556916 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-08 00:11:41.402986 | orchestrator | changed: [testbed-manager] 2026-01-08 00:11:41.403836 | orchestrator | 2026-01-08 00:11:41.403864 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-08 00:11:42.820759 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-01-08 00:11:42.820854 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-01-08 00:11:42.820869 | orchestrator | 2026-01-08 00:11:42.820900 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-01-08 00:11:44.280431 | orchestrator | changed: [testbed-manager] 2026-01-08 00:11:44.280491 | orchestrator | 2026-01-08 00:11:44.280500 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-08 00:11:46.132180 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-01-08 00:11:46.132219 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-01-08 00:11:46.132226 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-01-08 00:11:46.132231 | orchestrator | 2026-01-08 00:11:46.132239 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-08 00:11:46.194248 | orchestrator | skipping: 
[testbed-manager] 2026-01-08 00:11:46.194290 | orchestrator | 2026-01-08 00:11:46.194299 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-08 00:11:46.279428 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:11:46.279471 | orchestrator | 2026-01-08 00:11:46.279481 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-08 00:11:46.869487 | orchestrator | changed: [testbed-manager] 2026-01-08 00:11:46.869575 | orchestrator | 2026-01-08 00:11:46.869592 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-08 00:11:46.943723 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:11:46.943853 | orchestrator | 2026-01-08 00:11:46.943878 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-08 00:11:47.886315 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-08 00:11:47.886399 | orchestrator | changed: [testbed-manager] 2026-01-08 00:11:47.886415 | orchestrator | 2026-01-08 00:11:47.886428 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-08 00:11:47.922364 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:11:47.922443 | orchestrator | 2026-01-08 00:11:47.922458 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-08 00:11:47.956686 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:11:47.956757 | orchestrator | 2026-01-08 00:11:47.956773 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-08 00:11:47.989181 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:11:47.989227 | orchestrator | 2026-01-08 00:11:47.989238 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-08 00:11:48.061462 | 
orchestrator | skipping: [testbed-manager] 2026-01-08 00:11:48.061539 | orchestrator | 2026-01-08 00:11:48.061553 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-08 00:11:48.801502 | orchestrator | ok: [testbed-manager] 2026-01-08 00:11:48.801536 | orchestrator | 2026-01-08 00:11:48.801541 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-08 00:11:48.801547 | orchestrator | 2026-01-08 00:11:48.801551 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-08 00:11:50.209162 | orchestrator | ok: [testbed-manager] 2026-01-08 00:11:50.209197 | orchestrator | 2026-01-08 00:11:50.209203 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-01-08 00:11:51.174273 | orchestrator | changed: [testbed-manager] 2026-01-08 00:11:51.174359 | orchestrator | 2026-01-08 00:11:51.174376 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 00:11:51.174390 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-01-08 00:11:51.174402 | orchestrator | 2026-01-08 00:11:51.711826 | orchestrator | ok: Runtime: 0:06:31.039431 2026-01-08 00:11:51.741753 | 2026-01-08 00:11:51.741956 | TASK [Point out that logging in to the manager is now possible] 2026-01-08 00:11:51.792005 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-01-08 00:11:51.802334 | 2026-01-08 00:11:51.802488 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-08 00:11:51.851334 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output shown here. It takes a few minutes for this task to complete. 
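The PLAY RECAP line above (`ok=33 changed=23 unreachable=0 failed=0 ...`) is what CI wrappers typically inspect to decide whether a run succeeded. A small sketch of such a check, using the recap string copied from this log (the function name `recap_ok` is made up for illustration):

```shell
# Decide success from an Ansible PLAY RECAP line: a run is good when
# both failed= and unreachable= are zero for the host.
recap_ok() {
  local line=$1 failed unreachable
  failed=$(grep -o 'failed=[0-9]*' <<<"$line" | cut -d= -f2)
  unreachable=$(grep -o 'unreachable=[0-9]*' <<<"$line" | cut -d= -f2)
  # Missing counters default to 1 so a malformed line never counts as success.
  [[ ${failed:-1} -eq 0 && ${unreachable:-1} -eq 0 ]]
}

recap="testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0"
recap_ok "$recap" && echo "run succeeded"   # prints "run succeeded"
```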
2026-01-08 00:11:51.861338 | 2026-01-08 00:11:51.861494 | TASK [Run manager part 1 + 2] 2026-01-08 00:11:53.128894 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-08 00:11:53.198854 | orchestrator | 2026-01-08 00:11:53.198899 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-01-08 00:11:53.198906 | orchestrator | 2026-01-08 00:11:53.198918 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-08 00:11:55.885575 | orchestrator | ok: [testbed-manager] 2026-01-08 00:11:55.885624 | orchestrator | 2026-01-08 00:11:55.885646 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-08 00:11:55.909366 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:11:55.909406 | orchestrator | 2026-01-08 00:11:55.909414 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-08 00:11:55.937574 | orchestrator | ok: [testbed-manager] 2026-01-08 00:11:55.937616 | orchestrator | 2026-01-08 00:11:55.937623 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-08 00:11:55.982411 | orchestrator | ok: [testbed-manager] 2026-01-08 00:11:55.982451 | orchestrator | 2026-01-08 00:11:55.982458 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-08 00:11:56.040247 | orchestrator | ok: [testbed-manager] 2026-01-08 00:11:56.040293 | orchestrator | 2026-01-08 00:11:56.040301 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-08 00:11:56.099144 | orchestrator | ok: [testbed-manager] 2026-01-08 00:11:56.099220 | orchestrator | 2026-01-08 00:11:56.099234 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-08 00:11:56.148683 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-01-08 00:11:56.148828 | orchestrator | 2026-01-08 00:11:56.148860 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-08 00:11:56.899228 | orchestrator | ok: [testbed-manager] 2026-01-08 00:11:56.899297 | orchestrator | 2026-01-08 00:11:56.899307 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-08 00:11:56.940327 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:11:56.940385 | orchestrator | 2026-01-08 00:11:56.940392 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-08 00:11:58.233006 | orchestrator | changed: [testbed-manager] 2026-01-08 00:11:58.233048 | orchestrator | 2026-01-08 00:11:58.233056 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-08 00:11:58.768862 | orchestrator | ok: [testbed-manager] 2026-01-08 00:11:58.768926 | orchestrator | 2026-01-08 00:11:58.768941 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-08 00:11:59.898606 | orchestrator | changed: [testbed-manager] 2026-01-08 00:11:59.898646 | orchestrator | 2026-01-08 00:11:59.898654 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-08 00:12:15.442906 | orchestrator | changed: [testbed-manager] 2026-01-08 00:12:15.442979 | orchestrator | 2026-01-08 00:12:15.442996 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-08 00:12:16.135668 | orchestrator | ok: [testbed-manager] 2026-01-08 00:12:16.135733 | orchestrator | 2026-01-08 00:12:16.135779 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-01-08 00:12:16.192107 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:12:16.192162 | orchestrator | 2026-01-08 00:12:16.192175 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-01-08 00:12:17.189871 | orchestrator | changed: [testbed-manager] 2026-01-08 00:12:17.189932 | orchestrator | 2026-01-08 00:12:17.189946 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-01-08 00:12:18.109627 | orchestrator | changed: [testbed-manager] 2026-01-08 00:12:18.109672 | orchestrator | 2026-01-08 00:12:18.109683 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-01-08 00:12:18.693674 | orchestrator | changed: [testbed-manager] 2026-01-08 00:12:18.693816 | orchestrator | 2026-01-08 00:12:18.693834 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-01-08 00:12:18.734099 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-08 00:12:18.734208 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-08 00:12:18.734249 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-08 00:12:18.734262 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-01-08 00:12:23.128773 | orchestrator | changed: [testbed-manager] 2026-01-08 00:12:23.129127 | orchestrator | 2026-01-08 00:12:23.129156 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-01-08 00:12:32.170158 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-01-08 00:12:32.170477 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-01-08 00:12:32.170516 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-01-08 00:12:32.170538 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-01-08 00:12:32.170569 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-01-08 00:12:32.170582 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-01-08 00:12:32.170594 | orchestrator | 2026-01-08 00:12:32.170606 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-01-08 00:12:33.252502 | orchestrator | changed: [testbed-manager] 2026-01-08 00:12:33.252582 | orchestrator | 2026-01-08 00:12:33.252598 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-01-08 00:12:33.292050 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:12:33.292250 | orchestrator | 2026-01-08 00:12:33.292271 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-01-08 00:12:36.516716 | orchestrator | changed: [testbed-manager] 2026-01-08 00:12:36.516780 | orchestrator | 2026-01-08 00:12:36.516788 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-01-08 00:12:36.554960 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:12:36.555038 | orchestrator | 2026-01-08 00:12:36.555052 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-01-08 00:14:16.550575 | orchestrator | changed: [testbed-manager] 2026-01-08 
00:14:16.550619 | orchestrator | 2026-01-08 00:14:16.550628 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-08 00:14:17.770290 | orchestrator | ok: [testbed-manager] 2026-01-08 00:14:17.770331 | orchestrator | 2026-01-08 00:14:17.770336 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 00:14:17.770342 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-01-08 00:14:17.770347 | orchestrator | 2026-01-08 00:14:17.986219 | orchestrator | ok: Runtime: 0:02:25.699709 2026-01-08 00:14:18.003893 | 2026-01-08 00:14:18.004099 | TASK [Reboot manager] 2026-01-08 00:14:19.544795 | orchestrator | ok: Runtime: 0:00:00.988032 2026-01-08 00:14:19.561967 | 2026-01-08 00:14:19.562151 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-08 00:14:35.965326 | orchestrator | ok 2026-01-08 00:14:35.978170 | 2026-01-08 00:14:35.978397 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-08 00:15:36.036187 | orchestrator | ok 2026-01-08 00:15:36.049185 | 2026-01-08 00:15:36.049377 | TASK [Deploy manager + bootstrap nodes] 2026-01-08 00:15:38.767491 | orchestrator | 2026-01-08 00:15:38.767689 | orchestrator | # DEPLOY MANAGER 2026-01-08 00:15:38.767790 | orchestrator | 2026-01-08 00:15:38.767806 | orchestrator | + set -e 2026-01-08 00:15:38.767818 | orchestrator | + echo 2026-01-08 00:15:38.767831 | orchestrator | + echo '# DEPLOY MANAGER' 2026-01-08 00:15:38.767847 | orchestrator | + echo 2026-01-08 00:15:38.767891 | orchestrator | + cat /opt/manager-vars.sh 2026-01-08 00:15:38.771070 | orchestrator | export NUMBER_OF_NODES=6 2026-01-08 00:15:38.771095 | orchestrator | 2026-01-08 00:15:38.771107 | orchestrator | export CEPH_VERSION=reef 2026-01-08 00:15:38.771119 | orchestrator | export CONFIGURATION_VERSION=main 2026-01-08 00:15:38.771130 | orchestrator 
| export MANAGER_VERSION=latest 2026-01-08 00:15:38.771151 | orchestrator | export OPENSTACK_VERSION=2025.1 2026-01-08 00:15:38.771161 | orchestrator | 2026-01-08 00:15:38.771176 | orchestrator | export ARA=false 2026-01-08 00:15:38.771187 | orchestrator | export DEPLOY_MODE=manager 2026-01-08 00:15:38.771203 | orchestrator | export TEMPEST=true 2026-01-08 00:15:38.771213 | orchestrator | export IS_ZUUL=true 2026-01-08 00:15:38.771222 | orchestrator | 2026-01-08 00:15:38.771239 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.62 2026-01-08 00:15:38.771250 | orchestrator | export EXTERNAL_API=false 2026-01-08 00:15:38.771259 | orchestrator | 2026-01-08 00:15:38.771269 | orchestrator | export IMAGE_USER=ubuntu 2026-01-08 00:15:38.771282 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-01-08 00:15:38.771292 | orchestrator | 2026-01-08 00:15:38.771301 | orchestrator | export CEPH_STACK=ceph-ansible 2026-01-08 00:15:38.771349 | orchestrator | 2026-01-08 00:15:38.771361 | orchestrator | + echo 2026-01-08 00:15:38.771373 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-08 00:15:38.772402 | orchestrator | ++ export INTERACTIVE=false 2026-01-08 00:15:38.772418 | orchestrator | ++ INTERACTIVE=false 2026-01-08 00:15:38.772430 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-08 00:15:38.772440 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-08 00:15:38.773036 | orchestrator | + source /opt/manager-vars.sh 2026-01-08 00:15:38.773050 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-08 00:15:38.773060 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-08 00:15:38.773070 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-08 00:15:38.773079 | orchestrator | ++ CEPH_VERSION=reef 2026-01-08 00:15:38.773089 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-08 00:15:38.773099 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-08 00:15:38.773109 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-08 00:15:38.773118 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-01-08 00:15:38.773128 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-01-08 00:15:38.773146 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-01-08 00:15:38.773160 | orchestrator | ++ export ARA=false 2026-01-08 00:15:38.773170 | orchestrator | ++ ARA=false 2026-01-08 00:15:38.773180 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-08 00:15:38.773190 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-08 00:15:38.773199 | orchestrator | ++ export TEMPEST=true 2026-01-08 00:15:38.773209 | orchestrator | ++ TEMPEST=true 2026-01-08 00:15:38.773218 | orchestrator | ++ export IS_ZUUL=true 2026-01-08 00:15:38.773228 | orchestrator | ++ IS_ZUUL=true 2026-01-08 00:15:38.773238 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.62 2026-01-08 00:15:38.773248 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.62 2026-01-08 00:15:38.773257 | orchestrator | ++ export EXTERNAL_API=false 2026-01-08 00:15:38.773267 | orchestrator | ++ EXTERNAL_API=false 2026-01-08 00:15:38.773277 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-08 00:15:38.773286 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-08 00:15:38.773296 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-08 00:15:38.773306 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-08 00:15:38.773316 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-08 00:15:38.773326 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-08 00:15:38.773335 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-01-08 00:15:38.833295 | orchestrator | + docker version 2026-01-08 00:15:39.138586 | orchestrator | Client: Docker Engine - Community 2026-01-08 00:15:39.138732 | orchestrator | Version: 27.5.1 2026-01-08 00:15:39.138753 | orchestrator | API version: 1.47 2026-01-08 00:15:39.138774 | orchestrator | Go version: go1.22.11 2026-01-08 00:15:39.138793 | orchestrator | Git commit: 9f9e405 2026-01-08 00:15:39.138812 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-08 00:15:39.138833 | orchestrator | OS/Arch: linux/amd64 2026-01-08 00:15:39.138851 | orchestrator | Context: default 2026-01-08 00:15:39.138868 | orchestrator | 2026-01-08 00:15:39.138887 | orchestrator | Server: Docker Engine - Community 2026-01-08 00:15:39.138906 | orchestrator | Engine: 2026-01-08 00:15:39.138922 | orchestrator | Version: 27.5.1 2026-01-08 00:15:39.138941 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-01-08 00:15:39.138995 | orchestrator | Go version: go1.22.11 2026-01-08 00:15:39.139017 | orchestrator | Git commit: 4c9b3b0 2026-01-08 00:15:39.139036 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-08 00:15:39.139054 | orchestrator | OS/Arch: linux/amd64 2026-01-08 00:15:39.139070 | orchestrator | Experimental: false 2026-01-08 00:15:39.139081 | orchestrator | containerd: 2026-01-08 00:15:39.139091 | orchestrator | Version: v2.2.1 2026-01-08 00:15:39.139103 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-01-08 00:15:39.139115 | orchestrator | runc: 2026-01-08 00:15:39.139126 | orchestrator | Version: 1.3.4 2026-01-08 00:15:39.139137 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-01-08 00:15:39.139148 | orchestrator | docker-init: 2026-01-08 00:15:39.139172 | orchestrator | Version: 0.19.0 2026-01-08 00:15:39.139185 | orchestrator | GitCommit: de40ad0 2026-01-08 00:15:39.141814 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-01-08 00:15:39.152986 | orchestrator | + set -e 2026-01-08 00:15:39.153077 | orchestrator | + source /opt/manager-vars.sh 2026-01-08 00:15:39.153100 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-08 00:15:39.153120 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-08 00:15:39.153138 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-08 00:15:39.153157 | orchestrator | ++ CEPH_VERSION=reef 2026-01-08 00:15:39.153176 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-08 
00:15:39.153196 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-08 00:15:39.153215 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-08 00:15:39.153235 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-08 00:15:39.153254 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-01-08 00:15:39.153274 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-01-08 00:15:39.153305 | orchestrator | ++ export ARA=false 2026-01-08 00:15:39.153321 | orchestrator | ++ ARA=false 2026-01-08 00:15:39.153332 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-08 00:15:39.153344 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-08 00:15:39.153355 | orchestrator | ++ export TEMPEST=true 2026-01-08 00:15:39.153365 | orchestrator | ++ TEMPEST=true 2026-01-08 00:15:39.153376 | orchestrator | ++ export IS_ZUUL=true 2026-01-08 00:15:39.153387 | orchestrator | ++ IS_ZUUL=true 2026-01-08 00:15:39.153398 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.62 2026-01-08 00:15:39.153409 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.62 2026-01-08 00:15:39.153420 | orchestrator | ++ export EXTERNAL_API=false 2026-01-08 00:15:39.153431 | orchestrator | ++ EXTERNAL_API=false 2026-01-08 00:15:39.153442 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-08 00:15:39.153452 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-08 00:15:39.153463 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-08 00:15:39.153474 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-08 00:15:39.153485 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-08 00:15:39.153496 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-08 00:15:39.153507 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-08 00:15:39.153518 | orchestrator | ++ export INTERACTIVE=false 2026-01-08 00:15:39.153529 | orchestrator | ++ INTERACTIVE=false 2026-01-08 00:15:39.153539 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-08 00:15:39.153553 | orchestrator | ++ OSISM_APPLY_RETRY=1 
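The trace above shows `/opt/manager-vars.sh` being sourced twice: once by the outer deploy step and again by `000-manager.sh`. Each script simply re-sources the one shared file of exported settings, which is why the same `++ export ...` lines repeat. A minimal sketch of the pattern, using a temporary file as a stand-in for the real `/opt/manager-vars.sh`:

```shell
# Shared-vars pattern: one file of exports that every deploy script sources.
# The temp file here is an illustrative stand-in for /opt/manager-vars.sh.
vars_file=$(mktemp)
cat > "$vars_file" <<'EOF'
export NUMBER_OF_NODES=6
export CEPH_VERSION=reef
export OPENSTACK_VERSION=2025.1
EOF

source "$vars_file"   # idempotent: any script may re-source this safely
echo "nodes=$NUMBER_OF_NODES ceph=$CEPH_VERSION openstack=$OPENSTACK_VERSION"
rm -f "$vars_file"
```

Because the file contains only `export` assignments, sourcing it repeatedly is harmless, and every sub-script sees the same configuration without argument passing.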
2026-01-08 00:15:39.153568 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-08 00:15:39.153579 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-08 00:15:39.153590 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-01-08 00:15:39.161317 | orchestrator | + set -e 2026-01-08 00:15:39.161367 | orchestrator | + VERSION=reef 2026-01-08 00:15:39.162671 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-08 00:15:39.170648 | orchestrator | + [[ -n ceph_version: reef ]] 2026-01-08 00:15:39.170726 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-01-08 00:15:39.177375 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2025.1 2026-01-08 00:15:39.184379 | orchestrator | + set -e 2026-01-08 00:15:39.184416 | orchestrator | + VERSION=2025.1 2026-01-08 00:15:39.185706 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-01-08 00:15:39.189755 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-01-08 00:15:39.189790 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2025.1/g' /opt/configuration/environments/manager/configuration.yml 2026-01-08 00:15:39.196356 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-01-08 00:15:39.197315 | orchestrator | ++ semver latest 7.0.0 2026-01-08 00:15:39.264484 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-08 00:15:39.264587 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-08 00:15:39.264646 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-01-08 00:15:39.265678 | orchestrator | ++ semver latest 10.0.0-0 2026-01-08 00:15:39.326774 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-08 00:15:39.327258 | orchestrator | ++ semver 2025.1 2025.1 2026-01-08 00:15:39.414415 | orchestrator | + [[ 0 -ge 0 ]] 2026-01-08 00:15:39.414512 | orchestrator | + sed -i 
'/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-01-08 00:15:39.421626 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-01-08 00:15:39.427033 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-01-08 00:15:39.525508 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-08 00:15:39.532637 | orchestrator | + source /opt/venv/bin/activate 2026-01-08 00:15:39.534202 | orchestrator | ++ deactivate nondestructive 2026-01-08 00:15:39.534229 | orchestrator | ++ '[' -n '' ']' 2026-01-08 00:15:39.534242 | orchestrator | ++ '[' -n '' ']' 2026-01-08 00:15:39.534253 | orchestrator | ++ hash -r 2026-01-08 00:15:39.534270 | orchestrator | ++ '[' -n '' ']' 2026-01-08 00:15:39.534281 | orchestrator | ++ unset VIRTUAL_ENV 2026-01-08 00:15:39.534292 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-01-08 00:15:39.534303 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-01-08 00:15:39.534572 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-01-08 00:15:39.534597 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-01-08 00:15:39.534628 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-01-08 00:15:39.534640 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-01-08 00:15:39.534653 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-08 00:15:39.534685 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-08 00:15:39.534698 | orchestrator | ++ export PATH 2026-01-08 00:15:39.534709 | orchestrator | ++ '[' -n '' ']' 2026-01-08 00:15:39.534720 | orchestrator | ++ '[' -z '' ']' 2026-01-08 00:15:39.534731 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-01-08 00:15:39.534748 | orchestrator | ++ PS1='(venv) ' 2026-01-08 00:15:39.534760 | orchestrator | ++ export PS1 2026-01-08 00:15:39.534871 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-01-08 00:15:39.534887 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-01-08 00:15:39.534898 | orchestrator | ++ hash -r 2026-01-08 00:15:39.534913 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-01-08 00:15:40.853243 | orchestrator | 2026-01-08 00:15:40.853322 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-01-08 00:15:40.853334 | orchestrator | 2026-01-08 00:15:40.853343 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-01-08 00:15:41.443888 | orchestrator | ok: [testbed-manager] 2026-01-08 00:15:41.443987 | orchestrator | 2026-01-08 00:15:41.444004 | orchestrator | TASK [Copy fact files] ********************************************************* 
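The set-ceph-version.sh and set-openstack-version.sh traces above both follow the same grep-then-sed pattern: confirm the key is present in the configuration file, then rewrite its value in place. A minimal sketch of that pattern against a temporary file (the real scripts operate on /opt/configuration/environments/manager/configuration.yml; the file here is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the grep + sed version-pin pattern seen in the trace above.
# The temp file stands in for the testbed configuration file.
set -e

CONF=$(mktemp)
echo 'ceph_version: quincy' > "$CONF"

VERSION=reef
# Only rewrite the key if it already exists in the file.
if [[ -n "$(grep '^ceph_version:' "$CONF")" ]]; then
    sed -i "s/ceph_version: .*/ceph_version: ${VERSION}/g" "$CONF"
fi

cat "$CONF"   # ceph_version: reef
rm -f "$CONF"
```

Anchoring the grep pattern with `^` keeps the check (and the matching sed substitution) from touching commented-out or indented copies of the key.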
2026-01-08 00:15:42.455684 | orchestrator | changed: [testbed-manager]
2026-01-08 00:15:42.455793 | orchestrator |
2026-01-08 00:15:42.455815 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-01-08 00:15:42.455828 | orchestrator |
2026-01-08 00:15:42.455840 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-08 00:15:44.859434 | orchestrator | ok: [testbed-manager]
2026-01-08 00:15:44.859565 | orchestrator |
2026-01-08 00:15:44.859583 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-01-08 00:15:44.921848 | orchestrator | ok: [testbed-manager]
2026-01-08 00:15:44.921942 | orchestrator |
2026-01-08 00:15:44.921959 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-01-08 00:15:45.408329 | orchestrator | changed: [testbed-manager]
2026-01-08 00:15:45.408443 | orchestrator |
2026-01-08 00:15:45.408469 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-01-08 00:15:45.444407 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:15:45.444500 | orchestrator |
2026-01-08 00:15:45.444516 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-01-08 00:15:45.804850 | orchestrator | changed: [testbed-manager]
2026-01-08 00:15:45.804962 | orchestrator |
2026-01-08 00:15:45.804980 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2026-01-08 00:15:45.868373 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:15:45.868445 | orchestrator |
2026-01-08 00:15:45.868458 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-01-08 00:15:46.219964 | orchestrator | ok: [testbed-manager]
2026-01-08 00:15:46.220056 | orchestrator |
2026-01-08 00:15:46.220072 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-01-08 00:15:46.354803 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:15:46.354897 | orchestrator |
2026-01-08 00:15:46.354912 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-01-08 00:15:46.354924 | orchestrator |
2026-01-08 00:15:46.354936 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-08 00:15:48.110850 | orchestrator | ok: [testbed-manager]
2026-01-08 00:15:48.110955 | orchestrator |
2026-01-08 00:15:48.110972 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-01-08 00:15:48.215260 | orchestrator | included: osism.services.traefik for testbed-manager
2026-01-08 00:15:48.215383 | orchestrator |
2026-01-08 00:15:48.215400 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-01-08 00:15:48.292194 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-01-08 00:15:48.292286 | orchestrator |
2026-01-08 00:15:48.292301 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-01-08 00:15:49.462722 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-01-08 00:15:49.462951 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-01-08 00:15:49.462974 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-01-08 00:15:49.462987 | orchestrator |
2026-01-08 00:15:49.463001 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-01-08 00:15:51.350242 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-01-08 00:15:51.350346 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-01-08 00:15:51.350362 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-01-08 00:15:51.350375 | orchestrator |
2026-01-08 00:15:51.350388 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-01-08 00:15:52.009032 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-08 00:15:52.009146 | orchestrator | changed: [testbed-manager]
2026-01-08 00:15:52.009170 | orchestrator |
2026-01-08 00:15:52.009189 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-01-08 00:15:52.663100 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-08 00:15:52.663194 | orchestrator | changed: [testbed-manager]
2026-01-08 00:15:52.663210 | orchestrator |
2026-01-08 00:15:52.663223 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-01-08 00:15:52.726258 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:15:52.726347 | orchestrator |
2026-01-08 00:15:52.726362 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-01-08 00:15:53.087441 | orchestrator | ok: [testbed-manager]
2026-01-08 00:15:53.087540 | orchestrator |
2026-01-08 00:15:53.087557 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-01-08 00:15:53.154240 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-01-08 00:15:53.154318 | orchestrator |
2026-01-08 00:15:53.154328 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-01-08 00:15:54.320019 | orchestrator | changed: [testbed-manager]
2026-01-08 00:15:54.320131 | orchestrator |
2026-01-08 00:15:54.320149 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-01-08 00:15:55.161903 | orchestrator | changed: [testbed-manager]
2026-01-08 00:15:55.161990 | orchestrator |
2026-01-08 00:15:55.162001 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-01-08 00:16:05.890754 | orchestrator | changed: [testbed-manager]
2026-01-08 00:16:05.890864 | orchestrator |
2026-01-08 00:16:05.890882 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-01-08 00:16:05.936108 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:16:05.936172 | orchestrator |
2026-01-08 00:16:05.936185 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-01-08 00:16:05.936197 | orchestrator |
2026-01-08 00:16:05.936209 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-08 00:16:08.819288 | orchestrator | ok: [testbed-manager]
2026-01-08 00:16:08.819368 | orchestrator |
2026-01-08 00:16:08.819378 | orchestrator | TASK [Apply manager role] ******************************************************
2026-01-08 00:16:08.955961 | orchestrator | included: osism.services.manager for testbed-manager
2026-01-08 00:16:08.956032 | orchestrator |
2026-01-08 00:16:08.956044 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-01-08 00:16:09.017158 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-01-08 00:16:09.017272 | orchestrator |
2026-01-08 00:16:09.017298 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-01-08 00:16:11.752133 | orchestrator | ok: [testbed-manager]
2026-01-08 00:16:11.752237 | orchestrator |
2026-01-08 00:16:11.752255 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-01-08 00:16:11.808916 | orchestrator | ok: [testbed-manager]
2026-01-08 00:16:11.809012 | orchestrator |
2026-01-08 00:16:11.809027 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-01-08 00:16:11.943705 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-01-08 00:16:11.943808 | orchestrator |
2026-01-08 00:16:11.943825 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-01-08 00:16:14.874540 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-01-08 00:16:14.874712 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-01-08 00:16:14.874731 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-01-08 00:16:14.874743 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-01-08 00:16:14.874755 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-01-08 00:16:14.874767 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-01-08 00:16:14.874778 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-01-08 00:16:14.874790 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-01-08 00:16:14.874802 | orchestrator |
2026-01-08 00:16:14.874815 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-01-08 00:16:15.547806 | orchestrator | changed: [testbed-manager]
2026-01-08 00:16:15.547902 | orchestrator |
2026-01-08 00:16:15.547926 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-01-08 00:16:16.202707 | orchestrator | changed: [testbed-manager]
2026-01-08 00:16:16.202809 | orchestrator |
2026-01-08 00:16:16.202826 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-01-08 00:16:16.276865 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-01-08 00:16:16.276993 | orchestrator |
2026-01-08 00:16:16.277022 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-01-08 00:16:17.553412 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-01-08 00:16:17.553489 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-01-08 00:16:17.553500 | orchestrator |
2026-01-08 00:16:17.553511 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-01-08 00:16:18.224886 | orchestrator | changed: [testbed-manager]
2026-01-08 00:16:18.224983 | orchestrator |
2026-01-08 00:16:18.225002 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-01-08 00:16:18.275774 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:16:18.275845 | orchestrator |
2026-01-08 00:16:18.275859 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-01-08 00:16:18.349139 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-01-08 00:16:18.349248 | orchestrator |
2026-01-08 00:16:18.349263 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-01-08 00:16:19.025989 | orchestrator | changed: [testbed-manager]
2026-01-08 00:16:19.026183 | orchestrator |
2026-01-08 00:16:19.026201 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-01-08 00:16:19.085842 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-01-08 00:16:19.085943 | orchestrator |
2026-01-08 00:16:19.085961 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-01-08 00:16:20.491639 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-08 00:16:20.491735 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-08 00:16:20.491751 | orchestrator | changed: [testbed-manager]
2026-01-08 00:16:20.491764 | orchestrator |
2026-01-08 00:16:20.491775 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-01-08 00:16:21.145062 | orchestrator | changed: [testbed-manager]
2026-01-08 00:16:21.145145 | orchestrator |
2026-01-08 00:16:21.145159 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-01-08 00:16:21.207536 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:16:21.207612 | orchestrator |
2026-01-08 00:16:21.207643 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-01-08 00:16:21.311106 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-01-08 00:16:21.311179 | orchestrator |
2026-01-08 00:16:21.311187 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-01-08 00:16:21.876121 | orchestrator | changed: [testbed-manager]
2026-01-08 00:16:21.876236 | orchestrator |
2026-01-08 00:16:21.876265 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-01-08 00:16:22.320969 | orchestrator | changed: [testbed-manager]
2026-01-08 00:16:22.321038 | orchestrator |
2026-01-08 00:16:22.321055 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-01-08 00:16:23.583138 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-01-08 00:16:23.583246 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-01-08 00:16:23.583271 | orchestrator |
2026-01-08 00:16:23.583292 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-01-08 00:16:24.242905 | orchestrator | changed: [testbed-manager]
2026-01-08 00:16:24.242997 | orchestrator |
2026-01-08 00:16:24.243015 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-01-08 00:16:24.656977 | orchestrator | ok: [testbed-manager]
2026-01-08 00:16:24.657067 | orchestrator |
2026-01-08 00:16:24.657084 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-01-08 00:16:25.034120 | orchestrator | changed: [testbed-manager]
2026-01-08 00:16:25.034204 | orchestrator |
2026-01-08 00:16:25.034220 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-01-08 00:16:25.073805 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:16:25.073906 | orchestrator |
2026-01-08 00:16:25.073932 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-01-08 00:16:25.143443 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-01-08 00:16:25.143540 | orchestrator |
2026-01-08 00:16:25.143559 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-01-08 00:16:25.186815 | orchestrator | ok: [testbed-manager]
2026-01-08 00:16:25.186890 | orchestrator |
2026-01-08 00:16:25.186899 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-01-08 00:16:27.272829 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-01-08 00:16:27.272937 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-01-08 00:16:27.272955 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-01-08 00:16:27.272967 | orchestrator |
2026-01-08 00:16:27.272982 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-01-08 00:16:27.985186 | orchestrator | changed: [testbed-manager]
2026-01-08 00:16:27.985315 | orchestrator |
2026-01-08 00:16:27.985333 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-01-08 00:16:28.760470 | orchestrator | changed: [testbed-manager]
2026-01-08 00:16:28.760628 | orchestrator |
2026-01-08 00:16:28.760660 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-01-08 00:16:29.485553 | orchestrator | changed: [testbed-manager]
2026-01-08 00:16:29.485732 | orchestrator |
2026-01-08 00:16:29.485753 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-01-08 00:16:29.564978 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-01-08 00:16:29.565072 | orchestrator |
2026-01-08 00:16:29.565088 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-01-08 00:16:29.607554 | orchestrator | ok: [testbed-manager]
2026-01-08 00:16:29.607677 | orchestrator |
2026-01-08 00:16:29.607694 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-01-08 00:16:30.345373 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-01-08 00:16:30.345480 | orchestrator |
2026-01-08 00:16:30.345498 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-01-08 00:16:30.422140 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-01-08 00:16:30.422230 | orchestrator |
2026-01-08 00:16:30.422245 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-01-08 00:16:31.135697 | orchestrator | changed: [testbed-manager]
2026-01-08 00:16:31.135795 | orchestrator |
2026-01-08 00:16:31.135812 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-01-08 00:16:31.793551 | orchestrator | ok: [testbed-manager]
2026-01-08 00:16:31.793728 | orchestrator |
2026-01-08 00:16:31.793758 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-01-08 00:16:31.840731 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:16:31.840789 | orchestrator |
2026-01-08 00:16:31.840795 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-01-08 00:16:31.896544 | orchestrator | ok: [testbed-manager]
2026-01-08 00:16:31.896667 | orchestrator |
2026-01-08 00:16:31.896685 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-01-08 00:16:32.738335 | orchestrator | changed: [testbed-manager]
2026-01-08 00:16:32.738433 | orchestrator |
2026-01-08 00:16:32.738448 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-01-08 00:17:44.027875 | orchestrator | changed: [testbed-manager]
2026-01-08 00:17:44.028009 | orchestrator |
2026-01-08 00:17:44.028028 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-01-08 00:17:44.995964 | orchestrator | ok: [testbed-manager]
2026-01-08 00:17:44.996055 | orchestrator |
2026-01-08 00:17:44.996086 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-01-08 00:17:45.046970 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:17:45.047056 | orchestrator |
2026-01-08 00:17:45.047068 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-01-08 00:17:47.391359 | orchestrator | changed: [testbed-manager]
2026-01-08 00:17:47.391459 | orchestrator |
2026-01-08 00:17:47.391475 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-01-08 00:17:47.464786 | orchestrator | ok: [testbed-manager]
2026-01-08 00:17:47.464872 | orchestrator |
2026-01-08 00:17:47.464887 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-01-08 00:17:47.464899 | orchestrator |
2026-01-08 00:17:47.464911 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-01-08 00:17:47.513762 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:17:47.513836 | orchestrator |
2026-01-08 00:17:47.513845 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-01-08 00:18:47.563450 | orchestrator | Pausing for 60 seconds
2026-01-08 00:18:47.563632 | orchestrator | changed: [testbed-manager]
2026-01-08 00:18:47.563661 | orchestrator |
2026-01-08 00:18:47.563683 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-01-08 00:18:50.605049 | orchestrator | changed: [testbed-manager]
2026-01-08 00:18:50.605154 | orchestrator |
2026-01-08 00:18:50.605172 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-01-08 00:19:53.648638 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-01-08 00:19:53.648776 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-01-08 00:19:53.648805 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
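The "Wait for an healthy manager service" handler above polls until the health check passes, burning down from 50 retries (it succeeded after three attempts here). The same retry-until pattern in plain shell (the check command and retry limit are illustrative, not the handler's actual implementation):

```shell
#!/usr/bin/env bash
# Generic retry-until loop, in the spirit of Ansible's retries/until
# on the health check handler above. check_healthy is a stand-in.
attempts=0
max_retries=50

check_healthy() {
    # Illustrative check: succeeds from the third attempt onward.
    [ "$attempts" -ge 3 ]
}

until check_healthy; do
    attempts=$((attempts + 1))
    if [ "$attempts" -gt "$max_retries" ]; then
        echo "service did not become healthy" >&2
        exit 1
    fi
done
echo "healthy after $attempts retries"   # healthy after 3 retries
```

Bounding the loop matters: without the `max_retries` guard, a service that never comes up would hang the job instead of failing it.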
2026-01-08 00:19:53.648826 | orchestrator | changed: [testbed-manager]
2026-01-08 00:19:53.648845 | orchestrator |
2026-01-08 00:19:53.648857 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-01-08 00:20:04.870803 | orchestrator | changed: [testbed-manager]
2026-01-08 00:20:04.870929 | orchestrator |
2026-01-08 00:20:04.870955 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-01-08 00:20:04.965375 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-01-08 00:20:04.965507 | orchestrator |
2026-01-08 00:20:04.965525 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-01-08 00:20:04.965537 | orchestrator |
2026-01-08 00:20:04.965549 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-01-08 00:20:05.023958 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:20:05.024060 | orchestrator |
2026-01-08 00:20:05.024077 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-01-08 00:20:05.120928 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-01-08 00:20:05.120999 | orchestrator |
2026-01-08 00:20:05.121005 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-01-08 00:20:05.909280 | orchestrator | changed: [testbed-manager]
2026-01-08 00:20:05.909380 | orchestrator |
2026-01-08 00:20:05.909399 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-01-08 00:20:09.224720 | orchestrator | ok: [testbed-manager]
2026-01-08 00:20:09.224849 | orchestrator |
2026-01-08 00:20:09.224868 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-01-08 00:20:09.292110 | orchestrator | ok: [testbed-manager] => {
2026-01-08 00:20:09.292206 | orchestrator |     "version_check_result.stdout_lines": [
2026-01-08 00:20:09.292223 | orchestrator |         "=== OSISM Container Version Check ===",
2026-01-08 00:20:09.292235 | orchestrator |         "Checking running containers against expected versions...",
2026-01-08 00:20:09.292247 | orchestrator |         "",
2026-01-08 00:20:09.292259 | orchestrator |         "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-01-08 00:20:09.292271 | orchestrator |         "  Expected: registry.osism.tech/osism/inventory-reconciler:latest",
2026-01-08 00:20:09.292282 | orchestrator |         "  Enabled: true",
2026-01-08 00:20:09.292293 | orchestrator |         "  Running: registry.osism.tech/osism/inventory-reconciler:latest",
2026-01-08 00:20:09.292304 | orchestrator |         "  Status: ✅ MATCH",
2026-01-08 00:20:09.292315 | orchestrator |         "",
2026-01-08 00:20:09.292326 | orchestrator |         "Checking service: osism-ansible (OSISM Ansible Service)",
2026-01-08 00:20:09.292338 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-ansible:latest",
2026-01-08 00:20:09.292349 | orchestrator |         "  Enabled: true",
2026-01-08 00:20:09.292360 | orchestrator |         "  Running: registry.osism.tech/osism/osism-ansible:latest",
2026-01-08 00:20:09.292370 | orchestrator |         "  Status: ✅ MATCH",
2026-01-08 00:20:09.292381 | orchestrator |         "",
2026-01-08 00:20:09.292392 | orchestrator |         "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-01-08 00:20:09.292403 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-kubernetes:latest",
2026-01-08 00:20:09.292414 | orchestrator |         "  Enabled: true",
2026-01-08 00:20:09.292426 | orchestrator |         "  Running: registry.osism.tech/osism/osism-kubernetes:latest",
2026-01-08 00:20:09.292437 | orchestrator |         "  Status: ✅ MATCH",
2026-01-08 00:20:09.292448 | orchestrator |         "",
2026-01-08 00:20:09.292459 | orchestrator |         "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-01-08 00:20:09.292496 | orchestrator |         "  Expected: registry.osism.tech/osism/ceph-ansible:reef",
2026-01-08 00:20:09.292508 | orchestrator |         "  Enabled: true",
2026-01-08 00:20:09.292518 | orchestrator |         "  Running: registry.osism.tech/osism/ceph-ansible:reef",
2026-01-08 00:20:09.292529 | orchestrator |         "  Status: ✅ MATCH",
2026-01-08 00:20:09.292540 | orchestrator |         "",
2026-01-08 00:20:09.292550 | orchestrator |         "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-01-08 00:20:09.292588 | orchestrator |         "  Expected: registry.osism.tech/osism/kolla-ansible:2025.1",
2026-01-08 00:20:09.292599 | orchestrator |         "  Enabled: true",
2026-01-08 00:20:09.292610 | orchestrator |         "  Running: registry.osism.tech/osism/kolla-ansible:2025.1",
2026-01-08 00:20:09.292621 | orchestrator |         "  Status: ✅ MATCH",
2026-01-08 00:20:09.292632 | orchestrator |         "",
2026-01-08 00:20:09.292644 | orchestrator |         "Checking service: osismclient (OSISM Client)",
2026-01-08 00:20:09.292657 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:latest",
2026-01-08 00:20:09.292669 | orchestrator |         "  Enabled: true",
2026-01-08 00:20:09.292682 | orchestrator |         "  Running: registry.osism.tech/osism/osism:latest",
2026-01-08 00:20:09.292694 | orchestrator |         "  Status: ✅ MATCH",
2026-01-08 00:20:09.292706 | orchestrator |         "",
2026-01-08 00:20:09.292718 | orchestrator |         "Checking service: ara-server (ARA Server)",
2026-01-08 00:20:09.292732 | orchestrator |         "  Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-01-08 00:20:09.292745 | orchestrator |         "  Enabled: true",
2026-01-08 00:20:09.292768 | orchestrator |         "  Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-01-08 00:20:09.292781 | orchestrator |         "  Status: ✅ MATCH",
2026-01-08 00:20:09.292793 | orchestrator |         "",
2026-01-08 00:20:09.292806 | orchestrator |         "Checking service: mariadb (MariaDB for ARA)",
2026-01-08 00:20:09.292818 | orchestrator |         "  Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-01-08 00:20:09.292835 | orchestrator |         "  Enabled: true",
2026-01-08 00:20:09.292848 | orchestrator |         "  Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-01-08 00:20:09.292861 | orchestrator |         "  Status: ✅ MATCH",
2026-01-08 00:20:09.292873 | orchestrator |         "",
2026-01-08 00:20:09.292886 | orchestrator |         "Checking service: frontend (OSISM Frontend)",
2026-01-08 00:20:09.292899 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-frontend:latest",
2026-01-08 00:20:09.292912 | orchestrator |         "  Enabled: true",
2026-01-08 00:20:09.292925 | orchestrator |         "  Running: registry.osism.tech/osism/osism-frontend:latest",
2026-01-08 00:20:09.292938 | orchestrator |         "  Status: ✅ MATCH",
2026-01-08 00:20:09.292950 | orchestrator |         "",
2026-01-08 00:20:09.292963 | orchestrator |         "Checking service: redis (Redis Cache)",
2026-01-08 00:20:09.292976 | orchestrator |         "  Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-01-08 00:20:09.292991 | orchestrator |         "  Enabled: true",
2026-01-08 00:20:09.293011 | orchestrator |         "  Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-01-08 00:20:09.293030 | orchestrator |         "  Status: ✅ MATCH",
2026-01-08 00:20:09.293050 | orchestrator |         "",
2026-01-08 00:20:09.293070 | orchestrator |         "Checking service: api (OSISM API Service)",
2026-01-08 00:20:09.293090 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:latest",
2026-01-08 00:20:09.293110 | orchestrator |         "  Enabled: true",
2026-01-08 00:20:09.293126 | orchestrator |         "  Running: registry.osism.tech/osism/osism:latest",
2026-01-08 00:20:09.293138 | orchestrator |         "  Status: ✅ MATCH",
2026-01-08 00:20:09.293149 | orchestrator |         "",
2026-01-08 00:20:09.293160 | orchestrator |         "Checking service: listener (OpenStack Event Listener)",
2026-01-08 00:20:09.293171 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:latest",
2026-01-08 00:20:09.293183 | orchestrator |         "  Enabled: true",
2026-01-08 00:20:09.293194 | orchestrator |         "  Running: registry.osism.tech/osism/osism:latest",
2026-01-08 00:20:09.293205 | orchestrator |         "  Status: ✅ MATCH",
2026-01-08 00:20:09.293216 | orchestrator |         "",
2026-01-08 00:20:09.293227 | orchestrator |         "Checking service: openstack (OpenStack Integration)",
2026-01-08 00:20:09.293238 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:latest",
2026-01-08 00:20:09.293260 | orchestrator |         "  Enabled: true",
2026-01-08 00:20:09.293271 | orchestrator |         "  Running: registry.osism.tech/osism/osism:latest",
2026-01-08 00:20:09.293282 | orchestrator |         "  Status: ✅ MATCH",
2026-01-08 00:20:09.293293 | orchestrator |         "",
2026-01-08 00:20:09.293304 | orchestrator |         "Checking service: beat (Celery Beat Scheduler)",
2026-01-08 00:20:09.293315 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:latest",
2026-01-08 00:20:09.293326 | orchestrator |         "  Enabled: true",
2026-01-08 00:20:09.293337 | orchestrator |         "  Running: registry.osism.tech/osism/osism:latest",
2026-01-08 00:20:09.293348 | orchestrator |         "  Status: ✅ MATCH",
2026-01-08 00:20:09.293359 | orchestrator |         "",
2026-01-08 00:20:09.293370 | orchestrator |         "Checking service: flower (Celery Flower Monitor)",
2026-01-08 00:20:09.293399 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:latest",
2026-01-08 00:20:09.293411 | orchestrator |         "  Enabled: true",
2026-01-08 00:20:09.293422 | orchestrator |         "  Running: registry.osism.tech/osism/osism:latest",
2026-01-08 00:20:09.293433 | orchestrator |         "  Status: ✅ MATCH",
2026-01-08 00:20:09.293443 | orchestrator |         "",
2026-01-08 00:20:09.293454 | orchestrator |         "=== Summary ===",
2026-01-08 00:20:09.293465 | orchestrator |         "Errors (version mismatches): 0",
2026-01-08 00:20:09.293476 | orchestrator |         "Warnings (expected containers not running): 0",
2026-01-08 00:20:09.293487 | orchestrator |         "",
2026-01-08 00:20:09.293497 | orchestrator |         "✅ All running containers match expected versions!"
2026-01-08 00:20:09.293508 | orchestrator |     ]
2026-01-08 00:20:09.293520 | orchestrator | }
2026-01-08 00:20:09.293531 | orchestrator |
2026-01-08 00:20:09.293543 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-01-08 00:20:09.340297 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:20:09.340410 | orchestrator |
2026-01-08 00:20:09.340431 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 00:20:09.340450 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2026-01-08 00:20:09.340468 | orchestrator |
2026-01-08 00:20:09.457169 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-08 00:20:09.457260 | orchestrator | + deactivate
2026-01-08 00:20:09.457275 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-01-08 00:20:09.457289 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-08 00:20:09.457300 | orchestrator | + export PATH
2026-01-08 00:20:09.457311 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-01-08 00:20:09.457322 | orchestrator | + '[' -n '' ']'
2026-01-08 00:20:09.457333 | orchestrator | + hash -r
2026-01-08 00:20:09.457344 | orchestrator | + '[' -n '' ']'
2026-01-08 00:20:09.457355 | orchestrator | + unset VIRTUAL_ENV
2026-01-08 00:20:09.457366 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-01-08 00:20:09.457377 | orchestrator | + '[' '!'
'' = nondestructive ']' 2026-01-08 00:20:09.457388 | orchestrator | + unset -f deactivate 2026-01-08 00:20:09.457399 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-01-08 00:20:09.463873 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-08 00:20:09.463946 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-08 00:20:09.463960 | orchestrator | + local max_attempts=60 2026-01-08 00:20:09.463971 | orchestrator | + local name=ceph-ansible 2026-01-08 00:20:09.463981 | orchestrator | + local attempt_num=1 2026-01-08 00:20:09.464499 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-08 00:20:09.494583 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-08 00:20:09.494655 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-08 00:20:09.494671 | orchestrator | + local max_attempts=60 2026-01-08 00:20:09.494683 | orchestrator | + local name=kolla-ansible 2026-01-08 00:20:09.494694 | orchestrator | + local attempt_num=1 2026-01-08 00:20:09.494706 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-08 00:20:09.526304 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-08 00:20:09.526374 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-08 00:20:09.526386 | orchestrator | + local max_attempts=60 2026-01-08 00:20:09.526397 | orchestrator | + local name=osism-ansible 2026-01-08 00:20:09.526409 | orchestrator | + local attempt_num=1 2026-01-08 00:20:09.527056 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-01-08 00:20:09.567995 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-08 00:20:09.568078 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-08 00:20:09.568092 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-08 00:20:10.346102 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-01-08 00:20:10.535951 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-01-08 00:20:10.536134 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-01-08 00:20:10.536406 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2025.1 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-01-08 00:20:10.536427 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-01-08 00:20:10.536440 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-01-08 00:20:10.536452 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-01-08 00:20:10.536483 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-01-08 00:20:10.536496 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-01-08 00:20:10.536507 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-01-08 00:20:10.536518 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-01-08 00:20:10.536529 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-01-08 00:20:10.536540 | orchestrator | 
manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-01-08 00:20:10.536552 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-01-08 00:20:10.536600 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-01-08 00:20:10.536621 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-01-08 00:20:10.536641 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-01-08 00:20:10.542761 | orchestrator | ++ semver latest 7.0.0 2026-01-08 00:20:10.588069 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-08 00:20:10.588154 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-08 00:20:10.588196 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-01-08 00:20:10.590359 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-01-08 00:20:22.994351 | orchestrator | 2026-01-08 00:20:22 | INFO  | Task c5faf080-de5a-4d40-a373-76f85c8e8fcf (resolvconf) was prepared for execution. 2026-01-08 00:20:22.994462 | orchestrator | 2026-01-08 00:20:22 | INFO  | It takes a moment until task c5faf080-de5a-4d40-a373-76f85c8e8fcf (resolvconf) has been started and output is visible here. 
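The `wait_for_container_healthy` calls in the shell trace above poll Docker's health status before the run proceeds. A minimal reconstruction of such a helper might look like the following; the real script in the testbed repository may differ in attempt spacing and error handling, and the function body here is an assumption inferred from the trace:

```shell
#!/usr/bin/env bash
# Hedged sketch of a wait_for_container_healthy helper, reconstructed from
# the trace above (local max_attempts/name/attempt_num, docker inspect on
# .State.Health.Status). Not the verbatim testbed implementation.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll until Docker reports the container healthy, or give up after
    # max_attempts polls.
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5
    done
}
```

In the trace above all three containers (ceph-ansible, kolla-ansible, osism-ansible) report `healthy` on the first poll, so the loop body never runs.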
2026-01-08 00:20:37.441225 | orchestrator | 2026-01-08 00:20:37.441327 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-01-08 00:20:37.441342 | orchestrator | 2026-01-08 00:20:37.441354 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-08 00:20:37.441364 | orchestrator | Thursday 08 January 2026 00:20:27 +0000 (0:00:00.145) 0:00:00.145 ****** 2026-01-08 00:20:37.441374 | orchestrator | ok: [testbed-manager] 2026-01-08 00:20:37.441385 | orchestrator | 2026-01-08 00:20:37.441395 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-08 00:20:37.441406 | orchestrator | Thursday 08 January 2026 00:20:31 +0000 (0:00:03.914) 0:00:04.060 ****** 2026-01-08 00:20:37.441416 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:20:37.441426 | orchestrator | 2026-01-08 00:20:37.441436 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-08 00:20:37.441445 | orchestrator | Thursday 08 January 2026 00:20:31 +0000 (0:00:00.077) 0:00:04.137 ****** 2026-01-08 00:20:37.441455 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-01-08 00:20:37.441466 | orchestrator | 2026-01-08 00:20:37.441486 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-08 00:20:37.441496 | orchestrator | Thursday 08 January 2026 00:20:31 +0000 (0:00:00.077) 0:00:04.214 ****** 2026-01-08 00:20:37.441506 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-01-08 00:20:37.441516 | orchestrator | 2026-01-08 00:20:37.441526 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-01-08 00:20:37.441536 | orchestrator | Thursday 08 January 2026 00:20:31 +0000 (0:00:00.078) 0:00:04.293 ****** 2026-01-08 00:20:37.441546 | orchestrator | ok: [testbed-manager] 2026-01-08 00:20:37.441556 | orchestrator | 2026-01-08 00:20:37.441600 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-08 00:20:37.441610 | orchestrator | Thursday 08 January 2026 00:20:32 +0000 (0:00:01.140) 0:00:05.433 ****** 2026-01-08 00:20:37.441620 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:20:37.441630 | orchestrator | 2026-01-08 00:20:37.441639 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-08 00:20:37.441649 | orchestrator | Thursday 08 January 2026 00:20:32 +0000 (0:00:00.061) 0:00:05.495 ****** 2026-01-08 00:20:37.441658 | orchestrator | ok: [testbed-manager] 2026-01-08 00:20:37.441668 | orchestrator | 2026-01-08 00:20:37.441677 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-08 00:20:37.441687 | orchestrator | Thursday 08 January 2026 00:20:33 +0000 (0:00:00.539) 0:00:06.034 ****** 2026-01-08 00:20:37.441697 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:20:37.441706 | orchestrator | 2026-01-08 00:20:37.441716 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-08 00:20:37.441727 | orchestrator | Thursday 08 January 2026 00:20:33 +0000 (0:00:00.086) 0:00:06.121 ****** 2026-01-08 00:20:37.441736 | orchestrator | changed: [testbed-manager] 2026-01-08 00:20:37.441746 | orchestrator | 2026-01-08 00:20:37.441756 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-08 00:20:37.441768 | orchestrator | Thursday 08 January 2026 00:20:33 +0000 (0:00:00.565) 0:00:06.686 ****** 2026-01-08 00:20:37.441780 | orchestrator | changed: 
[testbed-manager] 2026-01-08 00:20:37.441852 | orchestrator | 2026-01-08 00:20:37.441864 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-08 00:20:37.441875 | orchestrator | Thursday 08 January 2026 00:20:34 +0000 (0:00:01.124) 0:00:07.810 ****** 2026-01-08 00:20:37.441886 | orchestrator | ok: [testbed-manager] 2026-01-08 00:20:37.441897 | orchestrator | 2026-01-08 00:20:37.441908 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-08 00:20:37.441919 | orchestrator | Thursday 08 January 2026 00:20:35 +0000 (0:00:01.019) 0:00:08.830 ****** 2026-01-08 00:20:37.441931 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-01-08 00:20:37.441942 | orchestrator | 2026-01-08 00:20:37.441953 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-08 00:20:37.441965 | orchestrator | Thursday 08 January 2026 00:20:36 +0000 (0:00:00.090) 0:00:08.921 ****** 2026-01-08 00:20:37.441976 | orchestrator | changed: [testbed-manager] 2026-01-08 00:20:37.441987 | orchestrator | 2026-01-08 00:20:37.441998 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 00:20:37.442010 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-08 00:20:37.442085 | orchestrator | 2026-01-08 00:20:37.442097 | orchestrator | 2026-01-08 00:20:37.442108 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 00:20:37.442118 | orchestrator | Thursday 08 January 2026 00:20:37 +0000 (0:00:01.162) 0:00:10.083 ****** 2026-01-08 00:20:37.442128 | orchestrator | =============================================================================== 2026-01-08 00:20:37.442137 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.91s 2026-01-08 00:20:37.442147 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.16s 2026-01-08 00:20:37.442156 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.14s 2026-01-08 00:20:37.442166 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.12s 2026-01-08 00:20:37.442175 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.02s 2026-01-08 00:20:37.442185 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.57s 2026-01-08 00:20:37.442213 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.54s 2026-01-08 00:20:37.442223 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2026-01-08 00:20:37.442233 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2026-01-08 00:20:37.442242 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2026-01-08 00:20:37.442259 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.08s 2026-01-08 00:20:37.442269 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-01-08 00:20:37.442278 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-01-08 00:20:37.765194 | orchestrator | + osism apply sshconfig 2026-01-08 00:20:49.889724 | orchestrator | 2026-01-08 00:20:49 | INFO  | Task ce63fc30-9805-4f8e-b1cb-6e84ebc91e30 (sshconfig) was prepared for execution. 
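The `osism.commons.sshconfig` tasks in the play output that follows (ensure `.ssh/config.d`, write one config per host, assemble) amount to a snippet-directory pattern. A rough shell equivalent is sketched below; the operator user name, snippet contents, and paths are illustrative assumptions, not the role's actual templates:

```shell
#!/usr/bin/env bash
# Hedged sketch of the config.d/assemble pattern applied by
# osism.commons.sshconfig; directory layout and option values are
# assumptions for illustration only.
ssh_dir="$(mktemp -d)/ssh"           # stand-in for the operator's ~/.ssh
mkdir -p "$ssh_dir/config.d"

# One snippet per managed host ("Ensure config for each host exist").
for host in testbed-manager testbed-node-0 testbed-node-1; do
    cat > "$ssh_dir/config.d/$host" <<EOF
Host $host
    User dragon
    StrictHostKeyChecking no
EOF
done

# Equivalent of the "Assemble ssh config" task: concatenate all snippets
# into a single config file with restrictive permissions.
cat "$ssh_dir"/config.d/* > "$ssh_dir/config"
chmod 600 "$ssh_dir/config"
grep -c '^Host ' "$ssh_dir/config"   # prints 3 (one stanza per host)
```

Keeping one file per host under `config.d` lets the role rewrite a single host's entry idempotently and then regenerate the assembled config, which matches the `changed` per-item results shown for each testbed node below.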
2026-01-08 00:20:49.889829 | orchestrator | 2026-01-08 00:20:49 | INFO  | It takes a moment until task ce63fc30-9805-4f8e-b1cb-6e84ebc91e30 (sshconfig) has been started and output is visible here. 2026-01-08 00:21:02.029507 | orchestrator | 2026-01-08 00:21:02.029673 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-01-08 00:21:02.029692 | orchestrator | 2026-01-08 00:21:02.029705 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-01-08 00:21:02.029717 | orchestrator | Thursday 08 January 2026 00:20:54 +0000 (0:00:00.162) 0:00:00.162 ****** 2026-01-08 00:21:02.029757 | orchestrator | ok: [testbed-manager] 2026-01-08 00:21:02.029770 | orchestrator | 2026-01-08 00:21:02.029781 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-01-08 00:21:02.029792 | orchestrator | Thursday 08 January 2026 00:20:54 +0000 (0:00:00.566) 0:00:00.729 ****** 2026-01-08 00:21:02.029803 | orchestrator | changed: [testbed-manager] 2026-01-08 00:21:02.029814 | orchestrator | 2026-01-08 00:21:02.029825 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-01-08 00:21:02.029836 | orchestrator | Thursday 08 January 2026 00:20:55 +0000 (0:00:00.540) 0:00:01.269 ****** 2026-01-08 00:21:02.029847 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-01-08 00:21:02.029858 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-01-08 00:21:02.029870 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-01-08 00:21:02.029880 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-01-08 00:21:02.029891 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-01-08 00:21:02.029902 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-01-08 00:21:02.029913 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-01-08 00:21:02.029924 | orchestrator | 2026-01-08 00:21:02.029935 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-01-08 00:21:02.029946 | orchestrator | Thursday 08 January 2026 00:21:01 +0000 (0:00:05.873) 0:00:07.143 ****** 2026-01-08 00:21:02.029956 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:21:02.029967 | orchestrator | 2026-01-08 00:21:02.029979 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-01-08 00:21:02.029991 | orchestrator | Thursday 08 January 2026 00:21:01 +0000 (0:00:00.077) 0:00:07.220 ****** 2026-01-08 00:21:02.030001 | orchestrator | changed: [testbed-manager] 2026-01-08 00:21:02.030012 | orchestrator | 2026-01-08 00:21:02.030094 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 00:21:02.030109 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-08 00:21:02.030124 | orchestrator | 2026-01-08 00:21:02.030137 | orchestrator | 2026-01-08 00:21:02.030149 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 00:21:02.030162 | orchestrator | Thursday 08 January 2026 00:21:01 +0000 (0:00:00.541) 0:00:07.762 ****** 2026-01-08 00:21:02.030175 | orchestrator | =============================================================================== 2026-01-08 00:21:02.030187 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.87s 2026-01-08 00:21:02.030200 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.57s 2026-01-08 00:21:02.030213 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.54s 2026-01-08 00:21:02.030226 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.54s 2026-01-08 00:21:02.030239 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2026-01-08 00:21:02.356310 | orchestrator | + osism apply known-hosts 2026-01-08 00:21:14.451661 | orchestrator | 2026-01-08 00:21:14 | INFO  | Task 894a3962-b03a-4678-a55d-1022ad2964ff (known-hosts) was prepared for execution. 2026-01-08 00:21:14.451769 | orchestrator | 2026-01-08 00:21:14 | INFO  | It takes a moment until task 894a3962-b03a-4678-a55d-1022ad2964ff (known-hosts) has been started and output is visible here. 2026-01-08 00:21:31.573266 | orchestrator | 2026-01-08 00:21:31.573371 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-01-08 00:21:31.573388 | orchestrator | 2026-01-08 00:21:31.573400 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-01-08 00:21:31.573413 | orchestrator | Thursday 08 January 2026 00:21:18 +0000 (0:00:00.175) 0:00:00.175 ****** 2026-01-08 00:21:31.573425 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-08 00:21:31.573459 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-08 00:21:31.573471 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-08 00:21:31.573482 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-08 00:21:31.573492 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-08 00:21:31.573504 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-08 00:21:31.573515 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-08 00:21:31.573525 | orchestrator | 2026-01-08 00:21:31.573538 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-01-08 00:21:31.573591 | orchestrator | Thursday 08 January 2026 00:21:24 +0000 (0:00:05.980) 0:00:06.155 ****** 2026-01-08 
00:21:31.573606 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-08 00:21:31.573619 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-08 00:21:31.573631 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-08 00:21:31.573642 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-08 00:21:31.573653 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-08 00:21:31.573664 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-08 00:21:31.573674 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-08 00:21:31.573685 | orchestrator | 2026-01-08 00:21:31.573697 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-08 00:21:31.573708 | orchestrator | Thursday 08 January 2026 00:21:24 +0000 (0:00:00.182) 0:00:06.338 ****** 2026-01-08 00:21:31.573719 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIIjgYcmCAzWwgmrO16omKAvxMWABDpBXBpGSFnBqQp6L) 2026-01-08 00:21:31.573735 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKi4xTJt+QcP20JZn2Kv2tRBsUwtmmk8QYamMGd0slHdF/Jj5BeCckHKtrIBqRF7ETBiE6vBh/iO7M2zL2+PvKyMSxEaBCq30r/UpoB/+659ktSRnmx60viQzVowyu8S2eZVHVB9VipFM9Dfd6IJlnI8VVmnA9TNHr8muOEwTKzgTBgrrb6gVrWZDXzY0I3kfYSzxho7CCHMp7h1f2wvPFI28912//c7cirs5oTy8YN5Vju/vHe05lBLQRL3/keOMfe0Lw6YmEq8YawDLDgfj+RaH9t1bSNkcbieObEYAbLISbOqdc7mfiAta0BJMe21beGdL2ENB/TcvHOLZgx+u5UVrgnsIB7rhEiiRdSkKbdcVuaR+4NDhFAff5vosPdfazrGJ0IGM03nGrzNxs/Pix6WjR5ryOz34HyHRirepFXuB2hyF5bG78GpETFLputJn98a22R/oEM4vhQAAKFTtUE1atFL+g/BWhm+SBYJiHsiadW/ojdMm6cf6QVNnqMDk=) 2026-01-08 00:21:31.573749 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAuHzlHZu3U/L9BUULulF3Qy0iEc4K3Am6ww/7OdAa6nuxdIJ03KZBYEZ0W85bX/CiCWmQjDZL5DhAWDB+qifLQ=) 2026-01-08 00:21:31.573762 | orchestrator | 2026-01-08 00:21:31.573776 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-08 00:21:31.573789 | orchestrator | Thursday 08 January 2026 00:21:26 +0000 (0:00:01.257) 0:00:07.595 ****** 2026-01-08 00:21:31.573803 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNbamvobbuxBncgVniIhx7Ah/PxnIcLxt7OlSt5/F9dDktkSKFDmyiLT3VowTOc+KQdByGGeJ7t6DYLSPHbpLVA=) 2026-01-08 00:21:31.573850 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCZazjye7pv16gv7Bw1pFu1HaNqrAtw9qyTKh6ISU4fKkik6+jufqPV8/HCsPu+uKXKpV3KCr0bsSNX1y1TrS3i1ibc/aLd6RYpBVlia4kc10dfH1cQ/Sz8dfDY/I5AtZVURVeiQ5xCu7M5yOHdGQx9gRBwAdDW/HLR0rjsQFC6g2kBxzIbHV69amo1JBblvnvIDZKC9DFXXm/6/dQAEd1rDi4l9vmc5UzhvU9OGo/lbRzprIyd3mAPIknq2VHGmKyDrGMSvdrGc/uPoXzeZCCFXMU99G+btfoj4wTjb3Tpv0osF6AVTBcaEvimKG70K0gXQnRjKoxCZTmwqRW1KYGMQ5WF9Zi5lIgoay2LqdbL4I+gIpbHiM7it0CRc4CnfotgyWsRYkmhsBNXRhSJDWPt8rV8Y02zBjlXJLFU4veJkKxVXCOPHLksn2qCpTG/RcemNXWtUMS+WT7+DKwlejB4nFm9QkX+QVW5tblAaUIfsLqDUY1Np4+1FEr8BG0p/ZU=) 2026-01-08 00:21:31.573865 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC89BiZNJ+d45KcgDU1eHsmvPAkEj7Xm3ABF7+g7dX3g) 2026-01-08 00:21:31.573879 | orchestrator | 2026-01-08 00:21:31.573891 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-08 00:21:31.573904 | orchestrator | Thursday 08 January 2026 00:21:27 +0000 (0:00:01.092) 0:00:08.688 ****** 2026-01-08 00:21:31.573918 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGQMZ1HxI9U4oqE/zdXhaeVsNgb++n8MX2wOZKZNETbJZi+IM4QlnqyIBi+ZtSozN4Sx7++f1gs1FuAq2e1XQkY=) 2026-01-08 00:21:31.574004 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQvkUZuqhU4850hn/660T818e0bbmWff/G/9ISleNyPnaRQ3mmz3QK/fst8q9yx1iwVlolKSQFgiGc4Bgc9fnbEXzwga1zIYRU0mc4qo8gE/ts+b9YZjB7MNGRVOOOF1EsdqlSUdlHjJILei27Z8z2foYKWxeRj+NI3HWMBBEmQ9lP4Rite0MY6NP3uQn44ceGDkeM69/c30hpeg7TlVx+vHBAIR+FWeJi2cjWBlg77TG8C+HJoL7tWIhvhgjLRd+LaxbOpYkC5SfXNUhdAuyjR36HeqAQ/A+Sml+wvWxY+TjiL3cSdWbwtGNy7p71AOLSNRxGC7eNhAfCsAJNc7Edh8lfLkTEeXfcF4QarLmbI7lSJgyFZA9opIOwPDoJZscPFYF/qGHykUh6DMUT8NyTjcp3XEIMI5POJlrPtYDh9cyMyBq0IEm9L7UJVbfiuwfQBj0lLmY6ZriYfJsD3e/zEs99NVJmq7xbEvaNVoQWEERBc2ZuEi6H2/HNc7Te+vM=) 2026-01-08 00:21:31.574074 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINNui+H3cwxesINXcufvqv0djLkjO9zVM18mOg2ixziH) 2026-01-08 00:21:31.574089 | orchestrator | 2026-01-08 00:21:31.574103 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-08 00:21:31.574115 | orchestrator | Thursday 08 January 2026 00:21:28 +0000 (0:00:01.038) 0:00:09.726 ****** 2026-01-08 00:21:31.574134 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDjc3KsTZ3A/tp/p36Q3pxnbyXYm5dz1F5bvEn2WnlPldPVyE87IwzCzADgUEn4X0cNzdiYKaDAJ5+exPiBXYsJb/2kGVx7c4fg4QMjWlEoG7IDYamq5u1rOfyUOSooHatSe+Vm4q6Q+uupAPgfC/xcuk5eFfeDkJPEYpn++a+Rm0B0ESmWZ7MIZbP2UeH6Nf6Wqc/TqvLVpVHuzLrYtWyqdrCE+d9cEGcEjNOLBk9nKVr4r2aSiAkzlpaUJaXzY4itiWdJyYNx0Zdwlr1B7tVwt96g7PlYtJseyxWtwAOFNJ7VfFLcfJwHfi2w63x1V2U6dLdSK0vk1R2aAvmnVoU3FOWHAjoR78OInBGZrxSP9rjZENcufJ9N/SMpFMwSrUJE9qY0fW4CL++FMMIuX4CnVgcF6x6khZ66/MqjfuaEN5HJ9yUXdanXVZZWcH0oExYj7MC8vri7wrymm6Jyu+svoUGkIfaVKCFt5vYFAIVK1p8NCv4PAzTDtXsgzLqJGeE=) 2026-01-08 00:21:31.574148 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMJppp19WWZeaIk66tnN8nBwMtACimMLltqXvz4e9hHPkJ2J7rXr9EMJHHWHF6h1Sg46IxBOHfAlmUBszDdKLcw=) 2026-01-08 00:21:31.574162 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH3FCKwsIFK8dKYlB4rNR77wMJ2zxRwuY0r0hvJul94a) 2026-01-08 00:21:31.574173 | orchestrator | 2026-01-08 00:21:31.574184 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-08 00:21:31.574195 | orchestrator | Thursday 08 January 2026 00:21:29 +0000 (0:00:01.069) 0:00:10.796 ****** 2026-01-08 00:21:31.574206 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN7lfvTMHhENcCakYcIfDQDb1RSVjKf0JK72i/iLE543d1TX+FpV5imzftI7o/cIUw6uCF/KV8qmJSW2+90TFQ4=) 2026-01-08 00:21:31.574217 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDE0UcJTvo12Q0VvJ4E8iUmQsxov+LbiqefXyXULLt2fML378wx7A21wgHI/C1j7OsKzKImUuiVbDf9L42AlK0/HsQsHUWBiiRQLTJe2u5kvjYdCV1rwFD9V//bE7bnnHlWN5EXSEEj41wy1MS/Dxn1YWutrwrpMuIwUYbeYJjFehBDQvz+oHBdRLNSqS80VfQurnGnxZP/IoJ6obAn5GfMj6RSQR2Mwm8HPSNX3mVhjy1+ob9+KjchkFTujpkQGFijkQhaVG3QsvK4OIQDdlDaOgMHvwCJvvAkgSO9G44oVE90M8jEqBWnk7vGTYdKPqYIwbX/O8FhrLYZDIGRO0yEheesLFXtkNkt6DlyZ/G2k8LPwMzto4IAPLvusEt+7Xfzl+JUCdb+em6v16v1HjKEELykD7csIRlGtYarvLv78XAcbG7N40hTGtUTBoRdkeOVr/eknG/wvjFGChOa1kB8K8CLkIjBdTXxhNsxS2Kq1gE/Cvx2ZanzmixifHEICzU=) 2026-01-08 00:21:31.574236 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKXxyys4o8MwNZ8grEXJDGUDuS9X75q/dwhLIfo1yiA0) 2026-01-08 00:21:31.574247 | orchestrator | 2026-01-08 00:21:31.574258 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-08 00:21:31.574269 | orchestrator | Thursday 08 January 2026 00:21:30 +0000 (0:00:01.099) 0:00:11.895 ****** 2026-01-08 00:21:31.574288 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAIYb++DiTkxKhU+cWEuRY0sgcAPWuFd+Pge2XUrXDyt) 2026-01-08 00:21:42.442123 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDCt1ZSys/ZgUJzfobWAjKUkQjbfBxgdCJCNCQUS9Nfxp436Lbc65aZGoXBMABB49wsqn4SF1XWIEISVfbYWUxyjzxGXVak4RenZGUHeHZNC2J7cTm4ZmZgpc1gSkL3NfbIaEbgaukx9NV+Lxf/K+Ztd4odOwnDZ2tl7h6d2pstdGS52uQ3UaMZ6xE6tCaB3DLYTKt6d+vkMQAIHK1B9i/h307YIlnYg0fLBskVVsyPwV8BCNvEZXWys6qY20IJPTRFQhvK2xVo/e40Mj3vRMP5wimixqrGuMvMcch2IMEhBNdTExxG9eUu3V+DOb9slXqt0SjQU2MGiXoDsZXnjjBNk1kcOddXaFq1kaxU8mRJ4qWTX6ooe5kJVJQQp51DKrmv1U0gtw1PTrcEc7k7tXkGh+KhCYt0lGpDLSi+RwvbEPnCpe9uV5NXMWi4g9FoIcF/8ss4Ovpf8A7IBUk6iOeaTryNU1W0sulVVwk15WeWz0bsylGQIDE6napps7RF6pU=) 2026-01-08 00:21:42.442252 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDhJ5/z/dhk1AJI9jCdZiQ84Gcp4HhHZ1Qelrr8qkdTAzbCkWNQEA2D1bOw89fporvc0ZJeVkewuNAIHYNABnWI=) 2026-01-08 00:21:42.443041 | orchestrator | 2026-01-08 00:21:42.443064 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-08 00:21:42.443077 | orchestrator | Thursday 08 January 2026 00:21:31 +0000 (0:00:01.137) 0:00:13.033 ****** 2026-01-08 00:21:42.443088 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIrpHd24dghs53saNdckOn+dt6Q+RpcHHmOkdY+1q0WU) 2026-01-08 00:21:42.443102 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAnqS/eBe9PCeKv6z14dKk6n2yYtq7dtY5jABevtHqapQIo8CrSmCY9cPjl7pwvLFnedh5+Zpg2usTtSl8mel+yggTDTqT5BFtVTAKkCJNpGIywd4WYuIulUNrO86b6Srlm5jHtRxgo0FwIHB+B4jEmBLhgI78anaMGW1X4/qlLedfpletACnH4gXaWzIyjnZWb8TtVloMfCbTciHxvVdVD7Z7NrrcV2NJkCkb2aQcGo9clZs5YxqZ9jtkD7jILW8qO8l6ottBrUSWJxP+LI0AHNkZLyFPzX7MKvWxMNb8YytoZ4Heo9fa5ybNtOeAwjzXUkgvNG2JMDLpNBVFNfalRZD7s6Tg5scUC8ELGlrQF9ne1rGtv9bBnwetP4zGgO5dw9up6R6n2qNejYMpgQXmFuFdGVQMZnCTflf8fAOYkg98uHXUDdw42zs9FCy1sFWwydTkH52kdsFgGGljG3IJ8Uod7nglRqWCXHtB+gZkZPNZ/0uP88XAP4V6bBKwki8=) 2026-01-08 00:21:42.443114 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBwfJlk4Lm12mhT6wbBJcpdNd20mF28edScOnynbMynEmTP5TO99EhIvnPwuFD5El89a2mt0e8O05cvKkntYh6M=) 2026-01-08 00:21:42.443126 | orchestrator | 2026-01-08 00:21:42.443137 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-01-08 00:21:42.443149 | orchestrator | Thursday 08 January 2026 00:21:32 +0000 (0:00:01.090) 0:00:14.123 ****** 2026-01-08 00:21:42.443161 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-08 00:21:42.443173 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-08 00:21:42.443184 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-08 00:21:42.443213 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-08 00:21:42.443225 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-08 00:21:42.443260 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-08 00:21:42.443272 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-08 00:21:42.443283 | orchestrator | 2026-01-08 00:21:42.443294 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-01-08 00:21:42.443307 | orchestrator | Thursday 08 January 2026 00:21:37 +0000 (0:00:05.330) 0:00:19.454 ****** 2026-01-08 00:21:42.443318 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-08 00:21:42.443331 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-08 00:21:42.443342 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-08 00:21:42.443353 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-08 00:21:42.443364 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-08 00:21:42.443375 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-08 00:21:42.443386 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-08 00:21:42.443397 | orchestrator | 2026-01-08 00:21:42.443425 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-08 00:21:42.443437 | orchestrator | Thursday 08 January 2026 00:21:38 +0000 (0:00:00.179) 0:00:19.634 ****** 2026-01-08 00:21:42.443448 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAuHzlHZu3U/L9BUULulF3Qy0iEc4K3Am6ww/7OdAa6nuxdIJ03KZBYEZ0W85bX/CiCWmQjDZL5DhAWDB+qifLQ=) 2026-01-08 00:21:42.443462 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDKi4xTJt+QcP20JZn2Kv2tRBsUwtmmk8QYamMGd0slHdF/Jj5BeCckHKtrIBqRF7ETBiE6vBh/iO7M2zL2+PvKyMSxEaBCq30r/UpoB/+659ktSRnmx60viQzVowyu8S2eZVHVB9VipFM9Dfd6IJlnI8VVmnA9TNHr8muOEwTKzgTBgrrb6gVrWZDXzY0I3kfYSzxho7CCHMp7h1f2wvPFI28912//c7cirs5oTy8YN5Vju/vHe05lBLQRL3/keOMfe0Lw6YmEq8YawDLDgfj+RaH9t1bSNkcbieObEYAbLISbOqdc7mfiAta0BJMe21beGdL2ENB/TcvHOLZgx+u5UVrgnsIB7rhEiiRdSkKbdcVuaR+4NDhFAff5vosPdfazrGJ0IGM03nGrzNxs/Pix6WjR5ryOz34HyHRirepFXuB2hyF5bG78GpETFLputJn98a22R/oEM4vhQAAKFTtUE1atFL+g/BWhm+SBYJiHsiadW/ojdMm6cf6QVNnqMDk=) 2026-01-08 00:21:42.443474 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIjgYcmCAzWwgmrO16omKAvxMWABDpBXBpGSFnBqQp6L) 2026-01-08 00:21:42.443484 | orchestrator | 2026-01-08 00:21:42.443495 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-08 00:21:42.443506 | orchestrator | Thursday 08 January 2026 00:21:39 +0000 (0:00:01.066) 0:00:20.700 ****** 2026-01-08 00:21:42.443518 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZazjye7pv16gv7Bw1pFu1HaNqrAtw9qyTKh6ISU4fKkik6+jufqPV8/HCsPu+uKXKpV3KCr0bsSNX1y1TrS3i1ibc/aLd6RYpBVlia4kc10dfH1cQ/Sz8dfDY/I5AtZVURVeiQ5xCu7M5yOHdGQx9gRBwAdDW/HLR0rjsQFC6g2kBxzIbHV69amo1JBblvnvIDZKC9DFXXm/6/dQAEd1rDi4l9vmc5UzhvU9OGo/lbRzprIyd3mAPIknq2VHGmKyDrGMSvdrGc/uPoXzeZCCFXMU99G+btfoj4wTjb3Tpv0osF6AVTBcaEvimKG70K0gXQnRjKoxCZTmwqRW1KYGMQ5WF9Zi5lIgoay2LqdbL4I+gIpbHiM7it0CRc4CnfotgyWsRYkmhsBNXRhSJDWPt8rV8Y02zBjlXJLFU4veJkKxVXCOPHLksn2qCpTG/RcemNXWtUMS+WT7+DKwlejB4nFm9QkX+QVW5tblAaUIfsLqDUY1Np4+1FEr8BG0p/ZU=) 2026-01-08 00:21:42.443537 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNbamvobbuxBncgVniIhx7Ah/PxnIcLxt7OlSt5/F9dDktkSKFDmyiLT3VowTOc+KQdByGGeJ7t6DYLSPHbpLVA=) 2026-01-08 00:21:42.443549 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC89BiZNJ+d45KcgDU1eHsmvPAkEj7Xm3ABF7+g7dX3g) 2026-01-08 00:21:42.443592 | orchestrator | 2026-01-08 00:21:42.443607 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-08 00:21:42.443618 | orchestrator | Thursday 08 January 2026 00:21:40 +0000 (0:00:01.059) 0:00:21.759 ****** 2026-01-08 00:21:42.443629 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCQvkUZuqhU4850hn/660T818e0bbmWff/G/9ISleNyPnaRQ3mmz3QK/fst8q9yx1iwVlolKSQFgiGc4Bgc9fnbEXzwga1zIYRU0mc4qo8gE/ts+b9YZjB7MNGRVOOOF1EsdqlSUdlHjJILei27Z8z2foYKWxeRj+NI3HWMBBEmQ9lP4Rite0MY6NP3uQn44ceGDkeM69/c30hpeg7TlVx+vHBAIR+FWeJi2cjWBlg77TG8C+HJoL7tWIhvhgjLRd+LaxbOpYkC5SfXNUhdAuyjR36HeqAQ/A+Sml+wvWxY+TjiL3cSdWbwtGNy7p71AOLSNRxGC7eNhAfCsAJNc7Edh8lfLkTEeXfcF4QarLmbI7lSJgyFZA9opIOwPDoJZscPFYF/qGHykUh6DMUT8NyTjcp3XEIMI5POJlrPtYDh9cyMyBq0IEm9L7UJVbfiuwfQBj0lLmY6ZriYfJsD3e/zEs99NVJmq7xbEvaNVoQWEERBc2ZuEi6H2/HNc7Te+vM=) 2026-01-08 00:21:42.443641 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGQMZ1HxI9U4oqE/zdXhaeVsNgb++n8MX2wOZKZNETbJZi+IM4QlnqyIBi+ZtSozN4Sx7++f1gs1FuAq2e1XQkY=) 2026-01-08 00:21:42.443652 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINNui+H3cwxesINXcufvqv0djLkjO9zVM18mOg2ixziH) 2026-01-08 00:21:42.443662 | orchestrator | 2026-01-08 00:21:42.443673 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-08 00:21:42.443684 | orchestrator | Thursday 08 January 2026 00:21:41 +0000 (0:00:01.064) 0:00:22.823 ****** 2026-01-08 00:21:42.443695 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH3FCKwsIFK8dKYlB4rNR77wMJ2zxRwuY0r0hvJul94a) 2026-01-08 00:21:42.443731 | orchestrator | changed: 
[testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDjc3KsTZ3A/tp/p36Q3pxnbyXYm5dz1F5bvEn2WnlPldPVyE87IwzCzADgUEn4X0cNzdiYKaDAJ5+exPiBXYsJb/2kGVx7c4fg4QMjWlEoG7IDYamq5u1rOfyUOSooHatSe+Vm4q6Q+uupAPgfC/xcuk5eFfeDkJPEYpn++a+Rm0B0ESmWZ7MIZbP2UeH6Nf6Wqc/TqvLVpVHuzLrYtWyqdrCE+d9cEGcEjNOLBk9nKVr4r2aSiAkzlpaUJaXzY4itiWdJyYNx0Zdwlr1B7tVwt96g7PlYtJseyxWtwAOFNJ7VfFLcfJwHfi2w63x1V2U6dLdSK0vk1R2aAvmnVoU3FOWHAjoR78OInBGZrxSP9rjZENcufJ9N/SMpFMwSrUJE9qY0fW4CL++FMMIuX4CnVgcF6x6khZ66/MqjfuaEN5HJ9yUXdanXVZZWcH0oExYj7MC8vri7wrymm6Jyu+svoUGkIfaVKCFt5vYFAIVK1p8NCv4PAzTDtXsgzLqJGeE=) 2026-01-08 00:21:47.053181 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMJppp19WWZeaIk66tnN8nBwMtACimMLltqXvz4e9hHPkJ2J7rXr9EMJHHWHF6h1Sg46IxBOHfAlmUBszDdKLcw=) 2026-01-08 00:21:47.053287 | orchestrator | 2026-01-08 00:21:47.053304 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-08 00:21:47.053318 | orchestrator | Thursday 08 January 2026 00:21:42 +0000 (0:00:01.076) 0:00:23.900 ****** 2026-01-08 00:21:47.053329 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN7lfvTMHhENcCakYcIfDQDb1RSVjKf0JK72i/iLE543d1TX+FpV5imzftI7o/cIUw6uCF/KV8qmJSW2+90TFQ4=) 2026-01-08 00:21:47.053343 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDE0UcJTvo12Q0VvJ4E8iUmQsxov+LbiqefXyXULLt2fML378wx7A21wgHI/C1j7OsKzKImUuiVbDf9L42AlK0/HsQsHUWBiiRQLTJe2u5kvjYdCV1rwFD9V//bE7bnnHlWN5EXSEEj41wy1MS/Dxn1YWutrwrpMuIwUYbeYJjFehBDQvz+oHBdRLNSqS80VfQurnGnxZP/IoJ6obAn5GfMj6RSQR2Mwm8HPSNX3mVhjy1+ob9+KjchkFTujpkQGFijkQhaVG3QsvK4OIQDdlDaOgMHvwCJvvAkgSO9G44oVE90M8jEqBWnk7vGTYdKPqYIwbX/O8FhrLYZDIGRO0yEheesLFXtkNkt6DlyZ/G2k8LPwMzto4IAPLvusEt+7Xfzl+JUCdb+em6v16v1HjKEELykD7csIRlGtYarvLv78XAcbG7N40hTGtUTBoRdkeOVr/eknG/wvjFGChOa1kB8K8CLkIjBdTXxhNsxS2Kq1gE/Cvx2ZanzmixifHEICzU=) 2026-01-08 00:21:47.053384 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKXxyys4o8MwNZ8grEXJDGUDuS9X75q/dwhLIfo1yiA0) 2026-01-08 00:21:47.053397 | orchestrator | 2026-01-08 00:21:47.053408 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-08 00:21:47.053419 | orchestrator | Thursday 08 January 2026 00:21:43 +0000 (0:00:01.098) 0:00:24.998 ****** 2026-01-08 00:21:47.053445 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDhJ5/z/dhk1AJI9jCdZiQ84Gcp4HhHZ1Qelrr8qkdTAzbCkWNQEA2D1bOw89fporvc0ZJeVkewuNAIHYNABnWI=) 2026-01-08 00:21:47.053458 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCt1ZSys/ZgUJzfobWAjKUkQjbfBxgdCJCNCQUS9Nfxp436Lbc65aZGoXBMABB49wsqn4SF1XWIEISVfbYWUxyjzxGXVak4RenZGUHeHZNC2J7cTm4ZmZgpc1gSkL3NfbIaEbgaukx9NV+Lxf/K+Ztd4odOwnDZ2tl7h6d2pstdGS52uQ3UaMZ6xE6tCaB3DLYTKt6d+vkMQAIHK1B9i/h307YIlnYg0fLBskVVsyPwV8BCNvEZXWys6qY20IJPTRFQhvK2xVo/e40Mj3vRMP5wimixqrGuMvMcch2IMEhBNdTExxG9eUu3V+DOb9slXqt0SjQU2MGiXoDsZXnjjBNk1kcOddXaFq1kaxU8mRJ4qWTX6ooe5kJVJQQp51DKrmv1U0gtw1PTrcEc7k7tXkGh+KhCYt0lGpDLSi+RwvbEPnCpe9uV5NXMWi4g9FoIcF/8ss4Ovpf8A7IBUk6iOeaTryNU1W0sulVVwk15WeWz0bsylGQIDE6napps7RF6pU=) 2026-01-08 00:21:47.053469 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAIYb++DiTkxKhU+cWEuRY0sgcAPWuFd+Pge2XUrXDyt) 2026-01-08 00:21:47.053480 | orchestrator | 2026-01-08 00:21:47.053491 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-08 00:21:47.053501 | orchestrator | Thursday 08 January 2026 00:21:44 +0000 (0:00:01.149) 0:00:26.148 ****** 2026-01-08 00:21:47.053512 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAnqS/eBe9PCeKv6z14dKk6n2yYtq7dtY5jABevtHqapQIo8CrSmCY9cPjl7pwvLFnedh5+Zpg2usTtSl8mel+yggTDTqT5BFtVTAKkCJNpGIywd4WYuIulUNrO86b6Srlm5jHtRxgo0FwIHB+B4jEmBLhgI78anaMGW1X4/qlLedfpletACnH4gXaWzIyjnZWb8TtVloMfCbTciHxvVdVD7Z7NrrcV2NJkCkb2aQcGo9clZs5YxqZ9jtkD7jILW8qO8l6ottBrUSWJxP+LI0AHNkZLyFPzX7MKvWxMNb8YytoZ4Heo9fa5ybNtOeAwjzXUkgvNG2JMDLpNBVFNfalRZD7s6Tg5scUC8ELGlrQF9ne1rGtv9bBnwetP4zGgO5dw9up6R6n2qNejYMpgQXmFuFdGVQMZnCTflf8fAOYkg98uHXUDdw42zs9FCy1sFWwydTkH52kdsFgGGljG3IJ8Uod7nglRqWCXHtB+gZkZPNZ/0uP88XAP4V6bBKwki8=) 2026-01-08 00:21:47.053524 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBwfJlk4Lm12mhT6wbBJcpdNd20mF28edScOnynbMynEmTP5TO99EhIvnPwuFD5El89a2mt0e8O05cvKkntYh6M=) 2026-01-08 00:21:47.053535 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIrpHd24dghs53saNdckOn+dt6Q+RpcHHmOkdY+1q0WU) 2026-01-08 00:21:47.053545 | orchestrator | 2026-01-08 00:21:47.053556 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-01-08 00:21:47.053675 | orchestrator | Thursday 08 January 2026 00:21:45 +0000 (0:00:01.129) 0:00:27.278 ****** 2026-01-08 00:21:47.053698 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-08 00:21:47.053717 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-01-08 00:21:47.053734 | orchestrator | 
skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-08 00:21:47.053745 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-08 00:21:47.053777 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-08 00:21:47.053788 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-08 00:21:47.053799 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-08 00:21:47.053810 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:21:47.053821 | orchestrator | 2026-01-08 00:21:47.053832 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-01-08 00:21:47.053854 | orchestrator | Thursday 08 January 2026 00:21:45 +0000 (0:00:00.171) 0:00:27.449 ****** 2026-01-08 00:21:47.053865 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:21:47.053880 | orchestrator | 2026-01-08 00:21:47.053897 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-01-08 00:21:47.053916 | orchestrator | Thursday 08 January 2026 00:21:46 +0000 (0:00:00.060) 0:00:27.510 ****** 2026-01-08 00:21:47.053933 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:21:47.053952 | orchestrator | 2026-01-08 00:21:47.053970 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-01-08 00:21:47.053989 | orchestrator | Thursday 08 January 2026 00:21:46 +0000 (0:00:00.065) 0:00:27.576 ****** 2026-01-08 00:21:47.054000 | orchestrator | changed: [testbed-manager] 2026-01-08 00:21:47.054011 | orchestrator | 2026-01-08 00:21:47.054096 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 00:21:47.054108 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-08 00:21:47.054120 | orchestrator | 2026-01-08 00:21:47.054131 | orchestrator | 2026-01-08 
00:21:47.054142 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 00:21:47.054152 | orchestrator | Thursday 08 January 2026 00:21:46 +0000 (0:00:00.735) 0:00:28.312 ****** 2026-01-08 00:21:47.054163 | orchestrator | =============================================================================== 2026-01-08 00:21:47.054174 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.98s 2026-01-08 00:21:47.054185 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.33s 2026-01-08 00:21:47.054196 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.26s 2026-01-08 00:21:47.054207 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-01-08 00:21:47.054218 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-01-08 00:21:47.054228 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-01-08 00:21:47.054239 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-01-08 00:21:47.054250 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-01-08 00:21:47.054260 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-01-08 00:21:47.054271 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-01-08 00:21:47.054282 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-01-08 00:21:47.054292 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-01-08 00:21:47.054303 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-01-08 
00:21:47.054323 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-01-08 00:21:47.054335 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-01-08 00:21:47.054346 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-01-08 00:21:47.054356 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.74s 2026-01-08 00:21:47.054367 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2026-01-08 00:21:47.054378 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2026-01-08 00:21:47.054389 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2026-01-08 00:21:47.395318 | orchestrator | + osism apply squid 2026-01-08 00:21:59.539979 | orchestrator | 2026-01-08 00:21:59 | INFO  | Task bffc629d-1e58-4cf4-b255-ef0d0a838d6c (squid) was prepared for execution. 2026-01-08 00:21:59.540090 | orchestrator | 2026-01-08 00:21:59 | INFO  | It takes a moment until task bffc629d-1e58-4cf4-b255-ef0d0a838d6c (squid) has been started and output is visible here. 
2026-01-08 00:24:20.015729 | orchestrator | 2026-01-08 00:24:20.015835 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-01-08 00:24:20.015850 | orchestrator | 2026-01-08 00:24:20.015862 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-01-08 00:24:20.015874 | orchestrator | Thursday 08 January 2026 00:22:03 +0000 (0:00:00.164) 0:00:00.164 ****** 2026-01-08 00:24:20.015885 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-01-08 00:24:20.015897 | orchestrator | 2026-01-08 00:24:20.015908 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-01-08 00:24:20.015919 | orchestrator | Thursday 08 January 2026 00:22:03 +0000 (0:00:00.088) 0:00:00.253 ****** 2026-01-08 00:24:20.015930 | orchestrator | ok: [testbed-manager] 2026-01-08 00:24:20.015942 | orchestrator | 2026-01-08 00:24:20.015953 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-01-08 00:24:20.015964 | orchestrator | Thursday 08 January 2026 00:22:05 +0000 (0:00:01.566) 0:00:01.819 ****** 2026-01-08 00:24:20.015975 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-01-08 00:24:20.015986 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-01-08 00:24:20.015997 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-01-08 00:24:20.016008 | orchestrator | 2026-01-08 00:24:20.016019 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-01-08 00:24:20.016029 | orchestrator | Thursday 08 January 2026 00:22:06 +0000 (0:00:01.207) 0:00:03.027 ****** 2026-01-08 00:24:20.016040 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-01-08 00:24:20.016051 | 
orchestrator | 2026-01-08 00:24:20.016062 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-01-08 00:24:20.016073 | orchestrator | Thursday 08 January 2026 00:22:07 +0000 (0:00:01.078) 0:00:04.105 ****** 2026-01-08 00:24:20.016084 | orchestrator | ok: [testbed-manager] 2026-01-08 00:24:20.016094 | orchestrator | 2026-01-08 00:24:20.016105 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-01-08 00:24:20.016116 | orchestrator | Thursday 08 January 2026 00:22:08 +0000 (0:00:00.381) 0:00:04.487 ****** 2026-01-08 00:24:20.016127 | orchestrator | changed: [testbed-manager] 2026-01-08 00:24:20.016138 | orchestrator | 2026-01-08 00:24:20.016149 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-01-08 00:24:20.016160 | orchestrator | Thursday 08 January 2026 00:22:09 +0000 (0:00:00.976) 0:00:05.463 ****** 2026-01-08 00:24:20.016171 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-01-08 00:24:20.016182 | orchestrator | ok: [testbed-manager] 2026-01-08 00:24:20.016193 | orchestrator | 2026-01-08 00:24:20.016204 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-01-08 00:24:20.016218 | orchestrator | Thursday 08 January 2026 00:23:03 +0000 (0:00:54.091) 0:00:59.555 ****** 2026-01-08 00:24:20.016230 | orchestrator | changed: [testbed-manager] 2026-01-08 00:24:20.016242 | orchestrator | 2026-01-08 00:24:20.016256 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-01-08 00:24:20.016268 | orchestrator | Thursday 08 January 2026 00:23:18 +0000 (0:00:15.742) 0:01:15.297 ****** 2026-01-08 00:24:20.016282 | orchestrator | Pausing for 60 seconds 2026-01-08 00:24:20.016295 | orchestrator | changed: [testbed-manager] 2026-01-08 00:24:20.016308 | orchestrator | 2026-01-08 00:24:20.016321 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-01-08 00:24:20.016334 | orchestrator | Thursday 08 January 2026 00:24:19 +0000 (0:01:00.085) 0:02:15.383 ****** 2026-01-08 00:24:20.016348 | orchestrator | ok: [testbed-manager] 2026-01-08 00:24:20.016360 | orchestrator | 2026-01-08 00:24:20.016373 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-01-08 00:24:20.016414 | orchestrator | Thursday 08 January 2026 00:24:19 +0000 (0:00:00.070) 0:02:15.453 ****** 2026-01-08 00:24:20.016427 | orchestrator | changed: [testbed-manager] 2026-01-08 00:24:20.016441 | orchestrator | 2026-01-08 00:24:20.016455 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 00:24:20.016468 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-08 00:24:20.016481 | orchestrator | 2026-01-08 00:24:20.016494 | orchestrator | 2026-01-08 00:24:20.016507 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-01-08 00:24:20.016520 | orchestrator | Thursday 08 January 2026 00:24:19 +0000 (0:00:00.655) 0:02:16.108 ****** 2026-01-08 00:24:20.016534 | orchestrator | =============================================================================== 2026-01-08 00:24:20.016546 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-01-08 00:24:20.016559 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 54.09s 2026-01-08 00:24:20.016655 | orchestrator | osism.services.squid : Restart squid service --------------------------- 15.74s 2026-01-08 00:24:20.016668 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.57s 2026-01-08 00:24:20.016679 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.21s 2026-01-08 00:24:20.016689 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.08s 2026-01-08 00:24:20.016700 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.98s 2026-01-08 00:24:20.016711 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.66s 2026-01-08 00:24:20.016721 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2026-01-08 00:24:20.016732 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2026-01-08 00:24:20.016743 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-01-08 00:24:20.333083 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-08 00:24:20.333178 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-01-08 00:24:20.336922 | orchestrator | + set -e 2026-01-08 00:24:20.336967 | orchestrator | + NAMESPACE=kolla 2026-01-08 
00:24:20.336980 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-01-08 00:24:20.343625 | orchestrator | ++ semver latest 9.0.0 2026-01-08 00:24:20.403851 | orchestrator | + [[ -1 -lt 0 ]] 2026-01-08 00:24:20.403940 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-08 00:24:20.404950 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-01-08 00:24:32.627130 | orchestrator | 2026-01-08 00:24:32 | INFO  | Task a8ece22b-735c-438f-a8de-455c3e3bc824 (operator) was prepared for execution. 2026-01-08 00:24:32.627242 | orchestrator | 2026-01-08 00:24:32 | INFO  | It takes a moment until task a8ece22b-735c-438f-a8de-455c3e3bc824 (operator) has been started and output is visible here. 2026-01-08 00:24:48.685729 | orchestrator | 2026-01-08 00:24:48.685838 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-01-08 00:24:48.685857 | orchestrator | 2026-01-08 00:24:48.685871 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-08 00:24:48.685885 | orchestrator | Thursday 08 January 2026 00:24:36 +0000 (0:00:00.145) 0:00:00.145 ****** 2026-01-08 00:24:48.685897 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:24:48.685910 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:24:48.685921 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:24:48.685932 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:24:48.685943 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:24:48.685958 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:24:48.685969 | orchestrator | 2026-01-08 00:24:48.685980 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-01-08 00:24:48.685992 | orchestrator | Thursday 08 January 2026 00:24:40 +0000 (0:00:03.349) 0:00:03.494 ****** 2026-01-08 00:24:48.686103 | orchestrator | ok: [testbed-node-5] 
2026-01-08 00:24:48.686120 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:24:48.686130 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:24:48.686141 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:24:48.686152 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:24:48.686163 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:24:48.686174 | orchestrator |
2026-01-08 00:24:48.686185 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-01-08 00:24:48.686196 | orchestrator |
2026-01-08 00:24:48.686207 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-01-08 00:24:48.686218 | orchestrator | Thursday 08 January 2026 00:24:41 +0000 (0:00:00.821) 0:00:04.316 ******
2026-01-08 00:24:48.686229 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:24:48.686240 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:24:48.686251 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:24:48.686261 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:24:48.686272 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:24:48.686283 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:24:48.686293 | orchestrator |
2026-01-08 00:24:48.686304 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-01-08 00:24:48.686315 | orchestrator | Thursday 08 January 2026 00:24:41 +0000 (0:00:00.178) 0:00:04.494 ******
2026-01-08 00:24:48.686327 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:24:48.686337 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:24:48.686348 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:24:48.686358 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:24:48.686369 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:24:48.686380 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:24:48.686391 | orchestrator |
2026-01-08 00:24:48.686401 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-01-08 00:24:48.686422 | orchestrator | Thursday 08 January 2026 00:24:41 +0000 (0:00:00.574) 0:00:04.670 ******
2026-01-08 00:24:48.686438 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:24:48.686451 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:24:48.686462 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:24:48.686472 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:24:48.686483 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:24:48.686494 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:24:48.686505 | orchestrator |
2026-01-08 00:24:48.686516 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-01-08 00:24:48.686527 | orchestrator | Thursday 08 January 2026 00:24:41 +0000 (0:00:00.574) 0:00:05.244 ******
2026-01-08 00:24:48.686538 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:24:48.686549 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:24:48.686560 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:24:48.686593 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:24:48.686605 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:24:48.686616 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:24:48.686627 | orchestrator |
2026-01-08 00:24:48.686638 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-01-08 00:24:48.686649 | orchestrator | Thursday 08 January 2026 00:24:42 +0000 (0:00:00.864) 0:00:06.109 ******
2026-01-08 00:24:48.686660 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-01-08 00:24:48.686672 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-01-08 00:24:48.686683 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-01-08 00:24:48.686694 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-01-08 00:24:48.686704 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-01-08 00:24:48.686715 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-01-08 00:24:48.686726 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-01-08 00:24:48.686737 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-01-08 00:24:48.686748 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-01-08 00:24:48.686759 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-01-08 00:24:48.686777 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-01-08 00:24:48.686788 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-01-08 00:24:48.686800 | orchestrator |
2026-01-08 00:24:48.686811 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-01-08 00:24:48.686822 | orchestrator | Thursday 08 January 2026 00:24:43 +0000 (0:00:01.154) 0:00:07.263 ******
2026-01-08 00:24:48.686833 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:24:48.686844 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:24:48.686854 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:24:48.686865 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:24:48.686876 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:24:48.686887 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:24:48.686898 | orchestrator |
2026-01-08 00:24:48.686909 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-01-08 00:24:48.686921 | orchestrator | Thursday 08 January 2026 00:24:45 +0000 (0:00:01.178) 0:00:08.442 ******
2026-01-08 00:24:48.686932 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-01-08 00:24:48.686943 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-01-08 00:24:48.686954 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-01-08 00:24:48.686965 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-01-08 00:24:48.686994 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-01-08 00:24:48.687006 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-01-08 00:24:48.687017 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-01-08 00:24:48.687028 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-01-08 00:24:48.687039 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-01-08 00:24:48.687050 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-01-08 00:24:48.687061 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-01-08 00:24:48.687072 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-01-08 00:24:48.687083 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-01-08 00:24:48.687094 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-01-08 00:24:48.687105 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-01-08 00:24:48.687116 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-01-08 00:24:48.687126 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-01-08 00:24:48.687137 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-01-08 00:24:48.687148 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-01-08 00:24:48.687159 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-01-08 00:24:48.687170 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-01-08 00:24:48.687181 | orchestrator |
2026-01-08 00:24:48.687192 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-01-08 00:24:48.687203 | orchestrator | Thursday 08 January 2026 00:24:46 +0000 (0:00:01.270) 0:00:09.713 ******
2026-01-08 00:24:48.687214 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:24:48.687225 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:24:48.687236 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:24:48.687247 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:24:48.687258 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:24:48.687268 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:24:48.687279 | orchestrator |
2026-01-08 00:24:48.687290 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-01-08 00:24:48.687301 | orchestrator | Thursday 08 January 2026 00:24:46 +0000 (0:00:00.168) 0:00:09.881 ******
2026-01-08 00:24:48.687318 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:24:48.687329 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:24:48.687345 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:24:48.687356 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:24:48.687367 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:24:48.687378 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:24:48.687389 | orchestrator |
2026-01-08 00:24:48.687400 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-01-08 00:24:48.687411 | orchestrator | Thursday 08 January 2026 00:24:46 +0000 (0:00:00.192) 0:00:10.074 ******
2026-01-08 00:24:48.687422 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:24:48.687433 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:24:48.687444 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:24:48.687455 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:24:48.687466 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:24:48.687477 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:24:48.687488 | orchestrator |
2026-01-08 00:24:48.687499 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-01-08 00:24:48.687510 | orchestrator | Thursday 08 January 2026 00:24:47 +0000 (0:00:00.663) 0:00:10.737 ******
2026-01-08 00:24:48.687521 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:24:48.687532 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:24:48.687543 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:24:48.687554 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:24:48.687581 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:24:48.687593 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:24:48.687604 | orchestrator |
2026-01-08 00:24:48.687616 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-01-08 00:24:48.687627 | orchestrator | Thursday 08 January 2026 00:24:47 +0000 (0:00:00.192) 0:00:10.929 ******
2026-01-08 00:24:48.687638 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-08 00:24:48.687649 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-08 00:24:48.687660 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:24:48.687670 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:24:48.687681 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-08 00:24:48.687692 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-08 00:24:48.687703 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:24:48.687714 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:24:48.687725 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-08 00:24:48.687736 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-01-08 00:24:48.687747 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:24:48.687758 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:24:48.687769 | orchestrator |
2026-01-08 00:24:48.687780 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-01-08 00:24:48.687791 | orchestrator | Thursday 08 January 2026 00:24:48 +0000 (0:00:00.707) 0:00:11.637 ******
2026-01-08 00:24:48.687802 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:24:48.687813 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:24:48.687824 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:24:48.687835 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:24:48.687846 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:24:48.687857 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:24:48.687868 | orchestrator |
2026-01-08 00:24:48.687879 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-01-08 00:24:48.687890 | orchestrator | Thursday 08 January 2026 00:24:48 +0000 (0:00:00.171) 0:00:11.808 ******
2026-01-08 00:24:48.687901 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:24:48.687912 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:24:48.687923 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:24:48.687934 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:24:48.687952 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:24:50.111723 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:24:50.111838 | orchestrator |
2026-01-08 00:24:50.111854 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-01-08 00:24:50.111866 | orchestrator | Thursday 08 January 2026 00:24:48 +0000 (0:00:00.177) 0:00:11.986 ******
2026-01-08 00:24:50.111875 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:24:50.111884 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:24:50.111893 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:24:50.111902 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:24:50.111911 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:24:50.111920 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:24:50.111928 | orchestrator |
2026-01-08 00:24:50.111938 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-01-08 00:24:50.111947 | orchestrator | Thursday 08 January 2026 00:24:48 +0000 (0:00:00.173) 0:00:12.160 ******
2026-01-08 00:24:50.111956 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:24:50.111965 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:24:50.111974 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:24:50.111982 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:24:50.111991 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:24:50.112000 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:24:50.112008 | orchestrator |
2026-01-08 00:24:50.112017 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-01-08 00:24:50.112026 | orchestrator | Thursday 08 January 2026 00:24:49 +0000 (0:00:00.766) 0:00:12.926 ******
2026-01-08 00:24:50.112035 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:24:50.112044 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:24:50.112053 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:24:50.112061 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:24:50.112070 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:24:50.112079 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:24:50.112088 | orchestrator |
2026-01-08 00:24:50.112096 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 00:24:50.112106 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-08 00:24:50.112117 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-08 00:24:50.112142 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-08 00:24:50.112152 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-08 00:24:50.112160 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-08 00:24:50.112169 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-08 00:24:50.112178 | orchestrator |
2026-01-08 00:24:50.112186 | orchestrator |
2026-01-08 00:24:50.112195 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 00:24:50.112204 | orchestrator | Thursday 08 January 2026 00:24:49 +0000 (0:00:00.226) 0:00:13.153 ******
2026-01-08 00:24:50.112215 | orchestrator | ===============================================================================
2026-01-08 00:24:50.112225 | orchestrator | Gathering Facts --------------------------------------------------------- 3.35s
2026-01-08 00:24:50.112235 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.27s
2026-01-08 00:24:50.112246 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.18s
2026-01-08 00:24:50.112256 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.15s
2026-01-08 00:24:50.112272 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.86s
2026-01-08 00:24:50.112282 | orchestrator | Do not require tty for all users ---------------------------------------- 0.82s
2026-01-08 00:24:50.112292 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.77s
2026-01-08 00:24:50.112301 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s
2026-01-08 00:24:50.112312 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.66s
2026-01-08 00:24:50.112321 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.57s
2026-01-08 00:24:50.112331 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s
2026-01-08 00:24:50.112341 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.19s
2026-01-08 00:24:50.112351 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s
2026-01-08 00:24:50.112361 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s
2026-01-08 00:24:50.112371 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.18s
2026-01-08 00:24:50.112381 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s
2026-01-08 00:24:50.112391 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s
2026-01-08 00:24:50.112401 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s
2026-01-08 00:24:50.112411 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.17s
2026-01-08 00:24:50.434414 | orchestrator | + osism apply --environment custom facts
2026-01-08 00:24:52.389448 | orchestrator | 2026-01-08 00:24:52 | INFO  | Trying to run play facts in environment custom
2026-01-08 00:25:02.563319 | orchestrator | 2026-01-08 00:25:02 | INFO  | Task d2750909-b7e0-49ca-a0dd-330d511ee55b (facts) was prepared for execution.
2026-01-08 00:25:02.563453 | orchestrator | 2026-01-08 00:25:02 | INFO  | It takes a moment until task d2750909-b7e0-49ca-a0dd-330d511ee55b (facts) has been started and output is visible here.
2026-01-08 00:25:46.912363 | orchestrator |
2026-01-08 00:25:46.912474 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-01-08 00:25:46.912483 | orchestrator |
2026-01-08 00:25:46.912491 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-08 00:25:46.912498 | orchestrator | Thursday 08 January 2026 00:25:06 +0000 (0:00:00.098) 0:00:00.099 ******
2026-01-08 00:25:46.912505 | orchestrator | ok: [testbed-manager]
2026-01-08 00:25:46.912513 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:25:46.912521 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:25:46.912528 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:25:46.912535 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:25:46.912577 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:25:46.912584 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:25:46.912591 | orchestrator |
2026-01-08 00:25:46.912598 | orchestrator | TASK [Copy fact file] **********************************************************
2026-01-08 00:25:46.912605 | orchestrator | Thursday 08 January 2026 00:25:08 +0000 (0:00:01.421) 0:00:01.520 ******
2026-01-08 00:25:46.912612 | orchestrator | ok: [testbed-manager]
2026-01-08 00:25:46.912619 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:25:46.912625 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:25:46.912632 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:25:46.912639 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:25:46.912646 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:25:46.912653 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:25:46.912660 | orchestrator |
2026-01-08 00:25:46.912666 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-01-08 00:25:46.912672 | orchestrator |
2026-01-08 00:25:46.912678 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-08 00:25:46.912708 | orchestrator | Thursday 08 January 2026 00:25:09 +0000 (0:00:01.180) 0:00:02.701 ******
2026-01-08 00:25:46.912715 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:25:46.912722 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:25:46.912728 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:25:46.912735 | orchestrator |
2026-01-08 00:25:46.912753 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-08 00:25:46.912761 | orchestrator | Thursday 08 January 2026 00:25:09 +0000 (0:00:00.108) 0:00:02.809 ******
2026-01-08 00:25:46.912767 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:25:46.912774 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:25:46.912780 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:25:46.912787 | orchestrator |
2026-01-08 00:25:46.912793 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-08 00:25:46.912800 | orchestrator | Thursday 08 January 2026 00:25:09 +0000 (0:00:00.210) 0:00:03.020 ******
2026-01-08 00:25:46.912806 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:25:46.912812 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:25:46.912819 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:25:46.912825 | orchestrator |
2026-01-08 00:25:46.912832 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-08 00:25:46.912838 | orchestrator | Thursday 08 January 2026 00:25:09 +0000 (0:00:00.151) 0:00:03.240 ******
2026-01-08 00:25:46.912847 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-08 00:25:46.912855 | orchestrator |
2026-01-08 00:25:46.912861 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-08 00:25:46.912868 | orchestrator | Thursday 08 January 2026 00:25:10 +0000 (0:00:00.431) 0:00:03.392 ******
2026-01-08 00:25:46.912874 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:25:46.912880 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:25:46.912887 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:25:46.912893 | orchestrator |
2026-01-08 00:25:46.912899 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-08 00:25:46.912906 | orchestrator | Thursday 08 January 2026 00:25:10 +0000 (0:00:00.140) 0:00:03.823 ******
2026-01-08 00:25:46.912912 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:25:46.912919 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:25:46.912926 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:25:46.912933 | orchestrator |
2026-01-08 00:25:46.912940 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-08 00:25:46.912947 | orchestrator | Thursday 08 January 2026 00:25:10 +0000 (0:00:00.140) 0:00:03.963 ******
2026-01-08 00:25:46.912954 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:25:46.912961 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:25:46.912968 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:25:46.912976 | orchestrator |
2026-01-08 00:25:46.912982 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-08 00:25:46.912990 | orchestrator | Thursday 08 January 2026 00:25:11 +0000 (0:00:01.059) 0:00:05.023 ******
2026-01-08 00:25:46.912997 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:25:46.913004 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:25:46.913011 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:25:46.913018 | orchestrator |
2026-01-08 00:25:46.913025 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-08 00:25:46.913032 | orchestrator | Thursday 08 January 2026 00:25:12 +0000 (0:00:00.442) 0:00:05.466 ******
2026-01-08 00:25:46.913039 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:25:46.913047 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:25:46.913054 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:25:46.913061 | orchestrator |
2026-01-08 00:25:46.913068 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-08 00:25:46.913075 | orchestrator | Thursday 08 January 2026 00:25:13 +0000 (0:00:01.137) 0:00:06.603 ******
2026-01-08 00:25:46.913082 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:25:46.913094 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:25:46.913101 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:25:46.913108 | orchestrator |
2026-01-08 00:25:46.913115 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-01-08 00:25:46.913122 | orchestrator | Thursday 08 January 2026 00:25:29 +0000 (0:00:15.739) 0:00:22.343 ******
2026-01-08 00:25:46.913129 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:25:46.913136 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:25:46.913142 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:25:46.913149 | orchestrator |
2026-01-08 00:25:46.913155 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-01-08 00:25:46.913175 | orchestrator | Thursday 08 January 2026 00:25:29 +0000 (0:00:00.113) 0:00:22.456 ******
2026-01-08 00:25:46.913181 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:25:46.913188 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:25:46.913194 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:25:46.913200 | orchestrator |
2026-01-08 00:25:46.913207 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-08 00:25:46.913213 | orchestrator | Thursday 08 January 2026 00:25:37 +0000 (0:00:08.377) 0:00:30.833 ******
2026-01-08 00:25:46.913219 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:25:46.913226 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:25:46.913232 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:25:46.913238 | orchestrator |
2026-01-08 00:25:46.913244 | orchestrator | TASK [Copy fact files] *********************************************************
2026-01-08 00:25:46.913251 | orchestrator | Thursday 08 January 2026 00:25:38 +0000 (0:00:00.464) 0:00:31.298 ******
2026-01-08 00:25:46.913258 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-01-08 00:25:46.913264 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-01-08 00:25:46.913271 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-01-08 00:25:46.913277 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-01-08 00:25:46.913283 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-01-08 00:25:46.913289 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-01-08 00:25:46.913295 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-01-08 00:25:46.913302 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-01-08 00:25:46.913308 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-01-08 00:25:46.913315 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-01-08 00:25:46.913321 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-01-08 00:25:46.913328 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-01-08 00:25:46.913334 | orchestrator |
2026-01-08 00:25:46.913341 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-08 00:25:46.913347 | orchestrator | Thursday 08 January 2026 00:25:41 +0000 (0:00:03.636) 0:00:34.934 ******
2026-01-08 00:25:46.913354 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:25:46.913360 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:25:46.913366 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:25:46.913373 | orchestrator |
2026-01-08 00:25:46.913379 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-08 00:25:46.913385 | orchestrator |
2026-01-08 00:25:46.913392 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-08 00:25:46.913398 | orchestrator | Thursday 08 January 2026 00:25:43 +0000 (0:00:01.515) 0:00:36.450 ******
2026-01-08 00:25:46.913405 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:25:46.913411 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:25:46.913418 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:25:46.913424 | orchestrator | ok: [testbed-manager]
2026-01-08 00:25:46.913430 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:25:46.913442 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:25:46.913448 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:25:46.913454 | orchestrator |
2026-01-08 00:25:46.913461 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 00:25:46.913499 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:25:46.913507 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:25:46.913514 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:25:46.913521 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:25:46.913527 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:25:46.913534 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:25:46.913582 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:25:46.913589 | orchestrator |
2026-01-08 00:25:46.913596 | orchestrator |
2026-01-08 00:25:46.913602 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 00:25:46.913609 | orchestrator | Thursday 08 January 2026 00:25:46 +0000 (0:00:03.711) 0:00:40.161 ******
2026-01-08 00:25:46.913615 | orchestrator | ===============================================================================
2026-01-08 00:25:46.913622 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.74s
2026-01-08 00:25:46.913628 | orchestrator | Install required packages (Debian) -------------------------------------- 8.38s
2026-01-08 00:25:46.913635 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.71s
2026-01-08 00:25:46.913641 | orchestrator | Copy fact files --------------------------------------------------------- 3.64s
2026-01-08 00:25:46.913648 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.52s
2026-01-08 00:25:46.913654 | orchestrator | Create custom facts directory ------------------------------------------- 1.42s
2026-01-08 00:25:46.913665 | orchestrator | Copy fact file ---------------------------------------------------------- 1.18s
2026-01-08 00:25:47.145200 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.14s
2026-01-08 00:25:47.145304 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.06s
2026-01-08 00:25:47.145320 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s
2026-01-08 00:25:47.145335 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.44s
2026-01-08 00:25:47.145353 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2026-01-08 00:25:47.145365 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s
2026-01-08 00:25:47.145376 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2026-01-08 00:25:47.145387 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2026-01-08 00:25:47.145398 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s
2026-01-08 00:25:47.145409 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2026-01-08 00:25:47.145420 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2026-01-08 00:25:47.461185 | orchestrator | + osism apply bootstrap
2026-01-08 00:25:59.599189 | orchestrator | 2026-01-08 00:25:59 | INFO  | Task cb848e56-58dc-4848-b9e7-214cbbaea48b (bootstrap) was prepared for execution.
2026-01-08 00:25:59.599330 | orchestrator | 2026-01-08 00:25:59 | INFO  | It takes a moment until task cb848e56-58dc-4848-b9e7-214cbbaea48b (bootstrap) has been started and output is visible here.
2026-01-08 00:26:15.898285 | orchestrator |
2026-01-08 00:26:15.898408 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-01-08 00:26:15.898427 | orchestrator |
2026-01-08 00:26:15.898440 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-01-08 00:26:15.898452 | orchestrator | Thursday 08 January 2026 00:26:03 +0000 (0:00:00.152) 0:00:00.152 ******
2026-01-08 00:26:15.898463 | orchestrator | ok: [testbed-manager]
2026-01-08 00:26:15.898476 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:26:15.898487 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:26:15.898498 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:26:15.898508 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:26:15.898520 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:26:15.898580 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:26:15.898594 | orchestrator |
2026-01-08 00:26:15.898605 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-08 00:26:15.898616 | orchestrator |
2026-01-08 00:26:15.898627 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-08 00:26:15.898638 | orchestrator | Thursday 08 January 2026 00:26:04 +0000 (0:00:00.277) 0:00:00.430 ******
2026-01-08 00:26:15.898649 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:26:15.898660 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:26:15.898672 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:26:15.898683 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:26:15.898694 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:26:15.898704 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:26:15.898715 | orchestrator | ok: [testbed-manager]
2026-01-08 00:26:15.898726 | orchestrator |
2026-01-08 00:26:15.898737 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-01-08 00:26:15.898748 | orchestrator |
2026-01-08 00:26:15.898759 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-08 00:26:15.898770 | orchestrator | Thursday 08 January 2026 00:26:08 +0000 (0:00:03.869) 0:00:04.300 ******
2026-01-08 00:26:15.898782 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-01-08 00:26:15.898793 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-01-08 00:26:15.898826 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-01-08 00:26:15.898840 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-01-08 00:26:15.898853 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-08 00:26:15.898866 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-01-08 00:26:15.898879 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-08 00:26:15.898891 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-08 00:26:15.898904 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-01-08 00:26:15.898916 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-01-08 00:26:15.898929 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-08 00:26:15.898941 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-01-08 00:26:15.898954 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-01-08 00:26:15.898967 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-01-08 00:26:15.898980 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-08 00:26:15.898993 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:26:15.899005 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-01-08 00:26:15.899017 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-01-08 00:26:15.899029 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-01-08 00:26:15.899042 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-01-08 00:26:15.899081 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-01-08 00:26:15.899094 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-08 00:26:15.899107 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-01-08 00:26:15.899119 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-01-08 00:26:15.899131 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-01-08 00:26:15.899144 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-01-08 00:26:15.899157 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-01-08 00:26:15.899169 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-01-08 00:26:15.899181 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-01-08 00:26:15.899194 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-08 00:26:15.899207 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:26:15.899218 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-01-08 00:26:15.899229 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-01-08 00:26:15.899239 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-01-08 00:26:15.899250 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-08 00:26:15.899261 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-01-08 00:26:15.899271 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-01-08 00:26:15.899282 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-08 00:26:15.899293 | orchestrator | skipping:
[testbed-node-0] => (item=testbed-node-0)  2026-01-08 00:26:15.899303 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-08 00:26:15.899314 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-01-08 00:26:15.899324 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-08 00:26:15.899335 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-01-08 00:26:15.899345 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-08 00:26:15.899356 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-01-08 00:26:15.899367 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-01-08 00:26:15.899396 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:26:15.899407 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-08 00:26:15.899418 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-08 00:26:15.899429 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-01-08 00:26:15.899440 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:26:15.899450 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:26:15.899461 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-08 00:26:15.899472 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:26:15.899482 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-08 00:26:15.899493 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:26:15.899504 | orchestrator | 2026-01-08 00:26:15.899515 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-01-08 00:26:15.899525 | orchestrator | 2026-01-08 00:26:15.899558 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-01-08 00:26:15.899570 | orchestrator | Thursday 08 January 2026 00:26:08 +0000 (0:00:00.418) 
0:00:04.718 ****** 2026-01-08 00:26:15.899581 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:26:15.899591 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:26:15.899602 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:26:15.899613 | orchestrator | ok: [testbed-manager] 2026-01-08 00:26:15.899624 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:26:15.899634 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:26:15.899645 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:26:15.899656 | orchestrator | 2026-01-08 00:26:15.899666 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-01-08 00:26:15.899687 | orchestrator | Thursday 08 January 2026 00:26:09 +0000 (0:00:01.231) 0:00:05.949 ****** 2026-01-08 00:26:15.899697 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:26:15.899708 | orchestrator | ok: [testbed-manager] 2026-01-08 00:26:15.899719 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:26:15.899729 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:26:15.899740 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:26:15.899751 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:26:15.899761 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:26:15.899772 | orchestrator | 2026-01-08 00:26:15.899783 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-01-08 00:26:15.899807 | orchestrator | Thursday 08 January 2026 00:26:10 +0000 (0:00:01.186) 0:00:07.136 ****** 2026-01-08 00:26:15.899820 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:26:15.899834 | orchestrator | 2026-01-08 00:26:15.899845 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-01-08 00:26:15.899856 | orchestrator | Thursday 08 
January 2026 00:26:11 +0000 (0:00:00.299) 0:00:07.435 ****** 2026-01-08 00:26:15.899866 | orchestrator | changed: [testbed-manager] 2026-01-08 00:26:15.899888 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:26:15.899900 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:26:15.899911 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:26:15.899922 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:26:15.899933 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:26:15.899943 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:26:15.899954 | orchestrator | 2026-01-08 00:26:15.899965 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-01-08 00:26:15.899976 | orchestrator | Thursday 08 January 2026 00:26:13 +0000 (0:00:02.122) 0:00:09.557 ****** 2026-01-08 00:26:15.899987 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:26:15.899999 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:26:15.900011 | orchestrator | 2026-01-08 00:26:15.900022 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-01-08 00:26:15.900033 | orchestrator | Thursday 08 January 2026 00:26:13 +0000 (0:00:00.321) 0:00:09.878 ****** 2026-01-08 00:26:15.900044 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:26:15.900054 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:26:15.900065 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:26:15.900076 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:26:15.900086 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:26:15.900097 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:26:15.900108 | orchestrator | 2026-01-08 00:26:15.900118 | orchestrator | TASK [osism.commons.proxy : Set system 
wide settings in environment file] ****** 2026-01-08 00:26:15.900129 | orchestrator | Thursday 08 January 2026 00:26:14 +0000 (0:00:01.027) 0:00:10.906 ****** 2026-01-08 00:26:15.900140 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:26:15.900151 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:26:15.900162 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:26:15.900172 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:26:15.900192 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:26:15.900204 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:26:15.900215 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:26:15.900225 | orchestrator | 2026-01-08 00:26:15.900236 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-01-08 00:26:15.900247 | orchestrator | Thursday 08 January 2026 00:26:15 +0000 (0:00:00.567) 0:00:11.473 ****** 2026-01-08 00:26:15.900258 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:26:15.900276 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:26:15.900286 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:26:15.900297 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:26:15.900307 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:26:15.900318 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:26:15.900329 | orchestrator | ok: [testbed-manager] 2026-01-08 00:26:15.900340 | orchestrator | 2026-01-08 00:26:15.900351 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-08 00:26:15.900362 | orchestrator | Thursday 08 January 2026 00:26:15 +0000 (0:00:00.429) 0:00:11.903 ****** 2026-01-08 00:26:15.900374 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:26:15.900385 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:26:15.900407 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:26:28.345117 | orchestrator | skipping: 
[testbed-node-5] 2026-01-08 00:26:28.345248 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:26:28.345272 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:26:28.345290 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:26:28.345307 | orchestrator | 2026-01-08 00:26:28.345326 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-08 00:26:28.345345 | orchestrator | Thursday 08 January 2026 00:26:15 +0000 (0:00:00.236) 0:00:12.140 ****** 2026-01-08 00:26:28.345364 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:26:28.345404 | orchestrator | 2026-01-08 00:26:28.345423 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-08 00:26:28.345442 | orchestrator | Thursday 08 January 2026 00:26:16 +0000 (0:00:00.302) 0:00:12.442 ****** 2026-01-08 00:26:28.345461 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:26:28.345480 | orchestrator | 2026-01-08 00:26:28.345497 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-01-08 00:26:28.345514 | orchestrator | Thursday 08 January 2026 00:26:16 +0000 (0:00:00.311) 0:00:12.754 ****** 2026-01-08 00:26:28.345566 | orchestrator | ok: [testbed-manager] 2026-01-08 00:26:28.345586 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:26:28.345603 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:26:28.345622 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:26:28.345640 | orchestrator | ok: [testbed-node-4] 2026-01-08 
00:26:28.345659 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:26:28.345676 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:26:28.345695 | orchestrator | 2026-01-08 00:26:28.345715 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-08 00:26:28.345734 | orchestrator | Thursday 08 January 2026 00:26:17 +0000 (0:00:01.382) 0:00:14.136 ****** 2026-01-08 00:26:28.345754 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:26:28.345774 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:26:28.345792 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:26:28.345810 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:26:28.345828 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:26:28.345847 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:26:28.345866 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:26:28.345884 | orchestrator | 2026-01-08 00:26:28.345904 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-08 00:26:28.345924 | orchestrator | Thursday 08 January 2026 00:26:18 +0000 (0:00:00.297) 0:00:14.434 ****** 2026-01-08 00:26:28.345942 | orchestrator | ok: [testbed-manager] 2026-01-08 00:26:28.345961 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:26:28.345979 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:26:28.345997 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:26:28.346082 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:26:28.346139 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:26:28.346158 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:26:28.346176 | orchestrator | 2026-01-08 00:26:28.346194 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-08 00:26:28.346212 | orchestrator | Thursday 08 January 2026 00:26:18 +0000 (0:00:00.524) 0:00:14.959 ****** 2026-01-08 00:26:28.346230 | orchestrator | skipping: 
[testbed-manager] 2026-01-08 00:26:28.346249 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:26:28.346265 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:26:28.346282 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:26:28.346299 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:26:28.346316 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:26:28.346332 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:26:28.346350 | orchestrator | 2026-01-08 00:26:28.346369 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-08 00:26:28.346388 | orchestrator | Thursday 08 January 2026 00:26:19 +0000 (0:00:00.259) 0:00:15.219 ****** 2026-01-08 00:26:28.346406 | orchestrator | ok: [testbed-manager] 2026-01-08 00:26:28.346423 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:26:28.346442 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:26:28.346461 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:26:28.346477 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:26:28.346494 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:26:28.346513 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:26:28.346601 | orchestrator | 2026-01-08 00:26:28.346622 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-08 00:26:28.346640 | orchestrator | Thursday 08 January 2026 00:26:19 +0000 (0:00:00.554) 0:00:15.773 ****** 2026-01-08 00:26:28.346657 | orchestrator | ok: [testbed-manager] 2026-01-08 00:26:28.346673 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:26:28.346691 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:26:28.346709 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:26:28.346726 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:26:28.346744 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:26:28.346762 | orchestrator | changed: 
[testbed-node-2] 2026-01-08 00:26:28.346782 | orchestrator | 2026-01-08 00:26:28.346801 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-08 00:26:28.346819 | orchestrator | Thursday 08 January 2026 00:26:20 +0000 (0:00:01.253) 0:00:17.026 ****** 2026-01-08 00:26:28.346830 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:26:28.346841 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:26:28.346851 | orchestrator | ok: [testbed-manager] 2026-01-08 00:26:28.346863 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:26:28.346874 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:26:28.346885 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:26:28.346895 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:26:28.346906 | orchestrator | 2026-01-08 00:26:28.346917 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-08 00:26:28.346929 | orchestrator | Thursday 08 January 2026 00:26:21 +0000 (0:00:01.122) 0:00:18.149 ****** 2026-01-08 00:26:28.346983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:26:28.346998 | orchestrator | 2026-01-08 00:26:28.347009 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-08 00:26:28.347020 | orchestrator | Thursday 08 January 2026 00:26:22 +0000 (0:00:00.326) 0:00:18.476 ****** 2026-01-08 00:26:28.347030 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:26:28.347041 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:26:28.347052 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:26:28.347063 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:26:28.347073 | orchestrator | changed: [testbed-node-2] 2026-01-08 
00:26:28.347098 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:26:28.347109 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:26:28.347120 | orchestrator | 2026-01-08 00:26:28.347131 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-08 00:26:28.347142 | orchestrator | Thursday 08 January 2026 00:26:23 +0000 (0:00:01.303) 0:00:19.779 ****** 2026-01-08 00:26:28.347152 | orchestrator | ok: [testbed-manager] 2026-01-08 00:26:28.347163 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:26:28.347174 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:26:28.347184 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:26:28.347195 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:26:28.347206 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:26:28.347216 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:26:28.347227 | orchestrator | 2026-01-08 00:26:28.347238 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-08 00:26:28.347249 | orchestrator | Thursday 08 January 2026 00:26:23 +0000 (0:00:00.247) 0:00:20.027 ****** 2026-01-08 00:26:28.347259 | orchestrator | ok: [testbed-manager] 2026-01-08 00:26:28.347269 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:26:28.347279 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:26:28.347289 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:26:28.347298 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:26:28.347308 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:26:28.347317 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:26:28.347326 | orchestrator | 2026-01-08 00:26:28.347336 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-08 00:26:28.347346 | orchestrator | Thursday 08 January 2026 00:26:24 +0000 (0:00:00.244) 0:00:20.272 ****** 2026-01-08 00:26:28.347355 | orchestrator | ok: [testbed-manager] 2026-01-08 00:26:28.347367 | 
orchestrator | ok: [testbed-node-3] 2026-01-08 00:26:28.347384 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:26:28.347400 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:26:28.347416 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:26:28.347433 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:26:28.347449 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:26:28.347467 | orchestrator | 2026-01-08 00:26:28.347483 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-08 00:26:28.347500 | orchestrator | Thursday 08 January 2026 00:26:24 +0000 (0:00:00.217) 0:00:20.489 ****** 2026-01-08 00:26:28.347511 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:26:28.347569 | orchestrator | 2026-01-08 00:26:28.347581 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-08 00:26:28.347591 | orchestrator | Thursday 08 January 2026 00:26:24 +0000 (0:00:00.294) 0:00:20.783 ****** 2026-01-08 00:26:28.347600 | orchestrator | ok: [testbed-manager] 2026-01-08 00:26:28.347610 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:26:28.347620 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:26:28.347629 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:26:28.347639 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:26:28.347649 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:26:28.347658 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:26:28.347667 | orchestrator | 2026-01-08 00:26:28.347677 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-08 00:26:28.347687 | orchestrator | Thursday 08 January 2026 00:26:25 +0000 (0:00:00.604) 0:00:21.388 ****** 2026-01-08 00:26:28.347697 | orchestrator | 
skipping: [testbed-manager] 2026-01-08 00:26:28.347706 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:26:28.347716 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:26:28.347726 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:26:28.347735 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:26:28.347744 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:26:28.347754 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:26:28.347773 | orchestrator | 2026-01-08 00:26:28.347783 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-08 00:26:28.347792 | orchestrator | Thursday 08 January 2026 00:26:25 +0000 (0:00:00.249) 0:00:21.638 ****** 2026-01-08 00:26:28.347802 | orchestrator | ok: [testbed-manager] 2026-01-08 00:26:28.347812 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:26:28.347821 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:26:28.347831 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:26:28.347840 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:26:28.347850 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:26:28.347859 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:26:28.347869 | orchestrator | 2026-01-08 00:26:28.347879 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-08 00:26:28.347889 | orchestrator | Thursday 08 January 2026 00:26:26 +0000 (0:00:01.118) 0:00:22.757 ****** 2026-01-08 00:26:28.347899 | orchestrator | ok: [testbed-manager] 2026-01-08 00:26:28.347908 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:26:28.347918 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:26:28.347927 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:26:28.347937 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:26:28.347946 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:26:28.347955 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:26:28.347965 | orchestrator | 
2026-01-08 00:26:28.347974 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-08 00:26:28.347984 | orchestrator | Thursday 08 January 2026 00:26:27 +0000 (0:00:00.557) 0:00:23.315 ****** 2026-01-08 00:26:28.347994 | orchestrator | ok: [testbed-manager] 2026-01-08 00:26:28.348003 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:26:28.348013 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:26:28.348023 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:26:28.348043 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:27:10.358574 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:27:10.358664 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:27:10.358674 | orchestrator | 2026-01-08 00:27:10.358682 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-08 00:27:10.358690 | orchestrator | Thursday 08 January 2026 00:26:28 +0000 (0:00:01.174) 0:00:24.489 ****** 2026-01-08 00:27:10.358696 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:27:10.358703 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:27:10.358709 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:27:10.358715 | orchestrator | changed: [testbed-manager] 2026-01-08 00:27:10.358721 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:27:10.358727 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:27:10.358732 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:27:10.358738 | orchestrator | 2026-01-08 00:27:10.358745 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2026-01-08 00:27:10.358751 | orchestrator | Thursday 08 January 2026 00:26:44 +0000 (0:00:16.130) 0:00:40.619 ****** 2026-01-08 00:27:10.358757 | orchestrator | ok: [testbed-manager] 2026-01-08 00:27:10.358763 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:27:10.358769 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:27:10.358775 | orchestrator 
| ok: [testbed-node-5] 2026-01-08 00:27:10.358781 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:27:10.358787 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:27:10.358793 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:27:10.358798 | orchestrator | 2026-01-08 00:27:10.358804 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-01-08 00:27:10.358810 | orchestrator | Thursday 08 January 2026 00:26:44 +0000 (0:00:00.243) 0:00:40.863 ****** 2026-01-08 00:27:10.358816 | orchestrator | ok: [testbed-manager] 2026-01-08 00:27:10.358821 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:27:10.358827 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:27:10.358833 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:27:10.358838 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:27:10.358844 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:27:10.358850 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:27:10.358874 | orchestrator | 2026-01-08 00:27:10.358880 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-01-08 00:27:10.358886 | orchestrator | Thursday 08 January 2026 00:26:44 +0000 (0:00:00.214) 0:00:41.078 ****** 2026-01-08 00:27:10.358892 | orchestrator | ok: [testbed-manager] 2026-01-08 00:27:10.358898 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:27:10.358903 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:27:10.358909 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:27:10.358915 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:27:10.358920 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:27:10.358926 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:27:10.358932 | orchestrator | 2026-01-08 00:27:10.358938 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-01-08 00:27:10.358944 | orchestrator | Thursday 08 January 2026 00:26:45 +0000 (0:00:00.266) 0:00:41.344 ****** 2026-01-08 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.rsyslog : Install rsyslog package] ************************
Thursday 08 January 2026 00:26:45 +0000 (0:00:00.291) 0:00:41.636 ******
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-5]
ok: [testbed-node-0]

TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
Thursday 08 January 2026 00:26:47 +0000 (0:00:01.627) 0:00:43.264 ******
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.rsyslog : Manage rsyslog service] *************************
Thursday 08 January 2026 00:26:48 +0000 (0:00:01.113) 0:00:44.377 ******
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-manager]

TASK [osism.services.rsyslog : Include fluentd tasks] **************************
Thursday 08 January 2026 00:26:49 +0000 (0:00:01.664) 0:00:46.042 ******
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
Thursday 08 January 2026 00:26:50 +0000 (0:00:00.285) 0:00:46.328 ******
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [osism.services.rsyslog : Include additional log server tasks] ************
Thursday 08 January 2026 00:26:51 +0000 (0:00:01.108) 0:00:47.436 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.rsyslog : Include logrotate tasks] ************************
Thursday 08 January 2026 00:26:51 +0000 (0:00:00.214) 0:00:47.650 ******
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
Thursday 08 January 2026 00:26:51 +0000 (0:00:00.319) 0:00:47.970 ******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-2]

TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
Thursday 08 January 2026 00:26:53 +0000 (0:00:01.705) 0:00:49.676 ******
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.commons.systohc : Install util-linux-extra package] ****************
Thursday 08 January 2026 00:26:54 +0000 (0:00:01.196) 0:00:50.872 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-manager]

TASK [osism.commons.systohc : Sync hardware clock] *****************************
Thursday 08 January 2026 00:27:07 +0000 (0:00:12.843) 0:01:03.715 ******
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-manager]

TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
Thursday 08 January 2026 00:27:08 +0000 (0:00:01.106) 0:01:04.822 ******
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Gather variables for each operating system] *****
Thursday 08 January 2026 00:27:09 +0000 (0:00:00.891) 0:01:05.714 ******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
Thursday 08 January 2026 00:27:09 +0000 (0:00:00.237) 0:01:05.951 ******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Include distribution specific package tasks] ****
Thursday 08 January 2026 00:27:10 +0000 (0:00:00.315) 0:01:06.191 ******
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.packages : Install needrestart package] ********************
Thursday 08 January 2026 00:27:10 +0000 (0:00:00.315) 0:01:06.506 ******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-1]

TASK [osism.commons.packages : Set needrestart mode] ***************************
Thursday 08 January 2026 00:27:12 +0000 (0:00:01.836) 0:01:08.342 ******
changed: [testbed-manager]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-3]

TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
Thursday 08 January 2026 00:27:12 +0000 (0:00:00.608) 0:01:08.951 ******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Update package cache] ***************************
Thursday 08 January 2026 00:27:13 +0000 (0:00:00.220) 0:01:09.172 ******
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-0]

TASK [osism.commons.packages : Download upgrade packages] **********************
Thursday 08 January 2026 00:27:14 +0000 (0:00:01.310) 0:01:10.482 ******
ok: [testbed-manager]
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-3]

TASK [osism.commons.packages : Upgrade packages] *******************************
Thursday 08 January 2026 00:27:18 +0000 (0:00:03.804) 0:01:14.287 ******
changed: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]

TASK [osism.commons.packages : Download required packages] *********************
Thursday 08 January 2026 00:27:36 +0000 (0:00:18.328) 0:01:32.616 ******
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-2]

TASK [osism.commons.packages : Install required packages] **********************
Thursday 08 January 2026 00:28:15 +0000 (0:00:38.684) 0:02:11.300 ******
changed: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [osism.commons.packages : Remove useless packages from the cache] *********
Thursday 08 January 2026 00:29:32 +0000 (0:01:17.088) 0:03:28.389 ******
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-2]
changed: [testbed-node-5]

TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
Thursday 08 January 2026 00:29:34 +0000 (0:00:01.885) 0:03:30.274 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-2]
changed: [testbed-manager]

TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
Thursday 08 January 2026 00:29:46 +0000 (0:00:12.685) 0:03:42.960 ******
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})

TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
Thursday 08 January 2026 00:29:47 +0000 (0:00:00.380) 0:03:43.340 ******
skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-manager]
skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-5]
changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})

TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
Thursday 08 January 2026 00:29:47 +0000 (0:00:00.742) 0:03:44.083 ******
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-manager]
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-5]
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})

TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
Thursday 08 January 2026 00:29:54 +0000 (0:00:06.880) 0:03:50.964 ******
changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})

TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
Thursday 08 January 2026 00:29:56 +0000 (0:00:01.523) 0:03:52.487 ******
skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-manager]
skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-2]
changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})

TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
Thursday 08 January 2026 00:29:56 +0000 (0:00:00.500) 0:03:52.987 ******
skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-manager]
skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-5]
changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})

TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
Thursday 08 January 2026 00:29:57 +0000 (0:00:00.590) 0:03:53.578 ******
skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-manager]
skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-2]
changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})

TASK [osism.commons.limits : Include limits tasks] *****************************
Thursday 08 January 2026 00:29:57 +0000 (0:00:00.573) 0:03:54.151 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.commons.services : Populate service facts] *************************
Thursday 08 January 2026 00:29:58 +0000 (0:00:00.316) 0:03:54.468 ******
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-manager]

TASK [osism.commons.services : Check services] *********************************
Thursday 08 January 2026 00:30:03 +0000 (0:00:05.624) 0:04:00.092 ******
skipping: [testbed-manager] => (item=nscd)
skipping: [testbed-manager]
skipping: [testbed-node-3] => (item=nscd)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=nscd)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=nscd)
skipping: [testbed-node-5]
skipping: [testbed-node-0] => (item=nscd)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=nscd)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=nscd)
skipping: [testbed-node-2]

TASK [osism.commons.services : Start/enable required services] *****************
Thursday 08 January 2026 00:30:04 +0000 (0:00:00.309) 0:04:00.402 ******
ok: [testbed-manager] => (item=cron)
ok:
[testbed-node-4] => (item=cron) 2026-01-08 00:30:10.184252 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-01-08 00:30:10.184280 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-01-08 00:30:10.184288 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-01-08 00:30:10.184296 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-01-08 00:30:10.184304 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-01-08 00:30:10.184312 | orchestrator | 2026-01-08 00:30:10.184320 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-01-08 00:30:10.184328 | orchestrator | Thursday 08 January 2026 00:30:05 +0000 (0:00:01.213) 0:04:01.615 ****** 2026-01-08 00:30:10.184346 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:30:10.184356 | orchestrator | 2026-01-08 00:30:10.184364 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-01-08 00:30:10.184372 | orchestrator | Thursday 08 January 2026 00:30:05 +0000 (0:00:00.424) 0:04:02.039 ****** 2026-01-08 00:30:10.184385 | orchestrator | ok: [testbed-manager] 2026-01-08 00:30:10.184398 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:30:10.184409 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:30:10.184420 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:30:10.184433 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:30:10.184473 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:30:10.184490 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:30:10.184498 | orchestrator | 2026-01-08 00:30:10.184506 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-01-08 00:30:10.184514 | orchestrator | Thursday 08 January 2026 00:30:07 +0000 
(0:00:01.285) 0:04:03.325 ****** 2026-01-08 00:30:10.184522 | orchestrator | ok: [testbed-manager] 2026-01-08 00:30:10.184530 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:30:10.184537 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:30:10.184545 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:30:10.184552 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:30:10.184560 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:30:10.184568 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:30:10.184576 | orchestrator | 2026-01-08 00:30:10.184584 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-01-08 00:30:10.184591 | orchestrator | Thursday 08 January 2026 00:30:07 +0000 (0:00:00.631) 0:04:03.956 ****** 2026-01-08 00:30:10.184599 | orchestrator | changed: [testbed-manager] 2026-01-08 00:30:10.184607 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:30:10.184615 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:30:10.184623 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:30:10.184630 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:30:10.184638 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:30:10.184646 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:30:10.184654 | orchestrator | 2026-01-08 00:30:10.184661 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-01-08 00:30:10.184669 | orchestrator | Thursday 08 January 2026 00:30:08 +0000 (0:00:00.675) 0:04:04.631 ****** 2026-01-08 00:30:10.184695 | orchestrator | ok: [testbed-manager] 2026-01-08 00:30:10.184703 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:30:10.184711 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:30:10.184719 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:30:10.184727 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:30:10.184735 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:30:10.184743 | orchestrator | ok: 
[testbed-node-0] 2026-01-08 00:30:10.184751 | orchestrator | 2026-01-08 00:30:10.184759 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-01-08 00:30:10.184767 | orchestrator | Thursday 08 January 2026 00:30:09 +0000 (0:00:00.635) 0:04:05.267 ****** 2026-01-08 00:30:10.184779 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767830721.401683, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-08 00:30:10.184789 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767830710.0765572, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-08 00:30:10.184808 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767830713.6288118, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-08 00:30:10.184835 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767830729.5417087, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-08 00:30:15.308880 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767830719.6188705, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-08 00:30:15.308994 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767830765.2661502, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-08 00:30:15.309007 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1767830769.9550512, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-08 00:30:15.309015 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-08 00:30:15.309022 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-08 00:30:15.309050 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 
1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-08 00:30:15.309072 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-08 00:30:15.309095 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-08 00:30:15.309104 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}) 2026-01-08 00:30:15.309111 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-08 00:30:15.309119 | orchestrator | 2026-01-08 00:30:15.309138 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-01-08 00:30:15.309147 | orchestrator | Thursday 08 January 2026 00:30:10 +0000 (0:00:01.063) 0:04:06.330 ****** 2026-01-08 00:30:15.309155 | orchestrator | changed: [testbed-manager] 2026-01-08 00:30:15.309171 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:30:15.309179 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:30:15.309186 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:30:15.309193 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:30:15.309200 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:30:15.309207 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:30:15.309215 | orchestrator | 2026-01-08 00:30:15.309222 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-01-08 00:30:15.309236 | orchestrator | Thursday 08 January 2026 00:30:11 +0000 (0:00:01.192) 0:04:07.522 ****** 2026-01-08 00:30:15.309243 | orchestrator | changed: [testbed-manager] 2026-01-08 00:30:15.309250 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:30:15.309257 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:30:15.309264 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:30:15.309272 | 
orchestrator | changed: [testbed-node-2] 2026-01-08 00:30:15.309279 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:30:15.309286 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:30:15.309293 | orchestrator | 2026-01-08 00:30:15.309300 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-01-08 00:30:15.309307 | orchestrator | Thursday 08 January 2026 00:30:12 +0000 (0:00:01.301) 0:04:08.823 ****** 2026-01-08 00:30:15.309314 | orchestrator | changed: [testbed-manager] 2026-01-08 00:30:15.309322 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:30:15.309329 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:30:15.309336 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:30:15.309343 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:30:15.309350 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:30:15.309357 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:30:15.309364 | orchestrator | 2026-01-08 00:30:15.309372 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-01-08 00:30:15.309379 | orchestrator | Thursday 08 January 2026 00:30:13 +0000 (0:00:01.126) 0:04:09.950 ****** 2026-01-08 00:30:15.309386 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:30:15.309393 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:30:15.309401 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:30:15.309408 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:30:15.309415 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:30:15.309423 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:30:15.309432 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:30:15.309440 | orchestrator | 2026-01-08 00:30:15.309469 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-01-08 00:30:15.309478 | orchestrator | Thursday 08 January 2026 00:30:14 +0000 
(0:00:00.297) 0:04:10.247 ****** 2026-01-08 00:30:15.309492 | orchestrator | ok: [testbed-manager] 2026-01-08 00:30:15.309502 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:30:15.309510 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:30:15.309519 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:30:15.309527 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:30:15.309536 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:30:15.309544 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:30:15.309552 | orchestrator | 2026-01-08 00:30:15.309561 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-01-08 00:30:15.309569 | orchestrator | Thursday 08 January 2026 00:30:14 +0000 (0:00:00.803) 0:04:11.051 ****** 2026-01-08 00:30:15.309578 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:30:15.309589 | orchestrator | 2026-01-08 00:30:15.309598 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-01-08 00:30:15.309612 | orchestrator | Thursday 08 January 2026 00:30:15 +0000 (0:00:00.405) 0:04:11.457 ****** 2026-01-08 00:31:35.203410 | orchestrator | ok: [testbed-manager] 2026-01-08 00:31:35.203542 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:31:35.203552 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:31:35.203559 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:31:35.203564 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:31:35.203570 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:31:35.203576 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:31:35.203582 | orchestrator | 2026-01-08 00:31:35.203589 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 
2026-01-08 00:31:35.203612 | orchestrator | Thursday 08 January 2026 00:30:23 +0000 (0:00:08.695) 0:04:20.152 ****** 2026-01-08 00:31:35.203618 | orchestrator | ok: [testbed-manager] 2026-01-08 00:31:35.203624 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:31:35.203630 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:31:35.203635 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:31:35.203641 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:31:35.203646 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:31:35.203652 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:31:35.203657 | orchestrator | 2026-01-08 00:31:35.203663 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-01-08 00:31:35.203669 | orchestrator | Thursday 08 January 2026 00:30:25 +0000 (0:00:01.333) 0:04:21.486 ****** 2026-01-08 00:31:35.203675 | orchestrator | ok: [testbed-manager] 2026-01-08 00:31:35.203680 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:31:35.203686 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:31:35.203691 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:31:35.203696 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:31:35.203702 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:31:35.203707 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:31:35.203713 | orchestrator | 2026-01-08 00:31:35.203718 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-01-08 00:31:35.203724 | orchestrator | Thursday 08 January 2026 00:30:26 +0000 (0:00:01.144) 0:04:22.631 ****** 2026-01-08 00:31:35.203729 | orchestrator | ok: [testbed-manager] 2026-01-08 00:31:35.203735 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:31:35.203740 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:31:35.203745 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:31:35.203751 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:31:35.203756 | orchestrator | ok: [testbed-node-1] 
2026-01-08 00:31:35.203762 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:31:35.203767 | orchestrator | 2026-01-08 00:31:35.203773 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-01-08 00:31:35.203779 | orchestrator | Thursday 08 January 2026 00:30:26 +0000 (0:00:00.292) 0:04:22.923 ****** 2026-01-08 00:31:35.203785 | orchestrator | ok: [testbed-manager] 2026-01-08 00:31:35.203790 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:31:35.203796 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:31:35.203801 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:31:35.203806 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:31:35.203812 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:31:35.203817 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:31:35.203823 | orchestrator | 2026-01-08 00:31:35.203828 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-01-08 00:31:35.203834 | orchestrator | Thursday 08 January 2026 00:30:27 +0000 (0:00:00.321) 0:04:23.245 ****** 2026-01-08 00:31:35.203839 | orchestrator | ok: [testbed-manager] 2026-01-08 00:31:35.203845 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:31:35.203850 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:31:35.203856 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:31:35.203861 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:31:35.203866 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:31:35.203872 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:31:35.203877 | orchestrator | 2026-01-08 00:31:35.203883 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-01-08 00:31:35.203888 | orchestrator | Thursday 08 January 2026 00:30:27 +0000 (0:00:00.319) 0:04:23.564 ****** 2026-01-08 00:31:35.203894 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:31:35.203899 | orchestrator | ok: [testbed-node-5] 
2026-01-08 00:31:35.203905 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:31:35.203910 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:31:35.203915 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:31:35.203921 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:31:35.203926 | orchestrator | ok: [testbed-manager] 2026-01-08 00:31:35.203932 | orchestrator | 2026-01-08 00:31:35.203938 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-01-08 00:31:35.203948 | orchestrator | Thursday 08 January 2026 00:30:32 +0000 (0:00:05.154) 0:04:28.719 ****** 2026-01-08 00:31:35.203955 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:31:35.203962 | orchestrator | 2026-01-08 00:31:35.203968 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-01-08 00:31:35.203974 | orchestrator | Thursday 08 January 2026 00:30:32 +0000 (0:00:00.426) 0:04:29.145 ****** 2026-01-08 00:31:35.203981 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-01-08 00:31:35.203987 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-01-08 00:31:35.203994 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-01-08 00:31:35.204000 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-01-08 00:31:35.204006 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:31:35.204013 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-01-08 00:31:35.204019 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:31:35.204026 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-01-08 00:31:35.204032 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  
2026-01-08 00:31:35.204038 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-01-08 00:31:35.204044 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:31:35.204051 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-01-08 00:31:35.204057 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-01-08 00:31:35.204064 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:31:35.204070 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-01-08 00:31:35.204076 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-01-08 00:31:35.204114 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:31:35.204121 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:31:35.204127 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-01-08 00:31:35.204134 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-01-08 00:31:35.204140 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:31:35.204146 | orchestrator | 2026-01-08 00:31:35.204153 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-01-08 00:31:35.204159 | orchestrator | Thursday 08 January 2026 00:30:33 +0000 (0:00:00.356) 0:04:29.501 ****** 2026-01-08 00:31:35.204166 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:31:35.204172 | orchestrator | 2026-01-08 00:31:35.204179 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-01-08 00:31:35.204185 | orchestrator | Thursday 08 January 2026 00:30:33 +0000 (0:00:00.426) 0:04:29.928 ****** 2026-01-08 00:31:35.204191 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-01-08 
00:31:35.204199 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:31:35.204209 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-01-08 00:31:35.204218 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-01-08 00:31:35.204227 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:31:35.204236 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:31:35.204244 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-01-08 00:31:35.204252 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-01-08 00:31:35.204261 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:31:35.204270 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-01-08 00:31:35.204278 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:31:35.204296 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:31:35.204303 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-01-08 00:31:35.204309 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:31:35.204314 | orchestrator | 2026-01-08 00:31:35.204319 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-01-08 00:31:35.204325 | orchestrator | Thursday 08 January 2026 00:30:34 +0000 (0:00:00.317) 0:04:30.245 ****** 2026-01-08 00:31:35.204346 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:31:35.204352 | orchestrator | 2026-01-08 00:31:35.204357 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-01-08 00:31:35.204362 | orchestrator | Thursday 08 January 2026 00:30:34 +0000 (0:00:00.437) 0:04:30.682 ****** 2026-01-08 00:31:35.204368 | 
orchestrator | changed: [testbed-node-0]
2026-01-08 00:31:35.204373 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:31:35.204379 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:31:35.204384 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:31:35.204390 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:31:35.204395 | orchestrator | changed: [testbed-manager]
2026-01-08 00:31:35.204401 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:31:35.204406 | orchestrator |
2026-01-08 00:31:35.204411 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-01-08 00:31:35.204417 | orchestrator | Thursday 08 January 2026 00:31:08 +0000 (0:00:34.140) 0:05:04.823 ******
2026-01-08 00:31:35.204422 | orchestrator | changed: [testbed-manager]
2026-01-08 00:31:35.204446 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:31:35.204452 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:31:35.204457 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:31:35.204462 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:31:35.204468 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:31:35.204473 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:31:35.204479 | orchestrator |
2026-01-08 00:31:35.204484 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-01-08 00:31:35.204489 | orchestrator | Thursday 08 January 2026 00:31:18 +0000 (0:00:09.781) 0:05:14.604 ******
2026-01-08 00:31:35.204495 | orchestrator | changed: [testbed-manager]
2026-01-08 00:31:35.204500 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:31:35.204506 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:31:35.204511 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:31:35.204517 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:31:35.204522 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:31:35.204527 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:31:35.204533 | orchestrator |
2026-01-08 00:31:35.204542 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-01-08 00:31:35.204548 | orchestrator | Thursday 08 January 2026 00:31:26 +0000 (0:00:08.121) 0:05:22.726 ******
2026-01-08 00:31:35.204553 | orchestrator | ok: [testbed-manager]
2026-01-08 00:31:35.204559 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:31:35.204564 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:31:35.204570 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:31:35.204575 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:31:35.204580 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:31:35.204586 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:31:35.204591 | orchestrator |
2026-01-08 00:31:35.204597 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-01-08 00:31:35.204602 | orchestrator | Thursday 08 January 2026 00:31:28 +0000 (0:00:02.160) 0:05:24.886 ******
2026-01-08 00:31:35.204608 | orchestrator | changed: [testbed-manager]
2026-01-08 00:31:35.204613 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:31:35.204619 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:31:35.204624 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:31:35.204634 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:31:35.204639 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:31:35.204645 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:31:35.204650 | orchestrator |
2026-01-08 00:31:35.204660 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-01-08 00:31:46.658077 | orchestrator | Thursday 08 January 2026 00:31:35 +0000 (0:00:06.451) 0:05:31.337 ******
2026-01-08 00:31:46.658191 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:31:46.658212 | orchestrator |
2026-01-08 00:31:46.658226 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-01-08 00:31:46.658238 | orchestrator | Thursday 08 January 2026 00:31:35 +0000 (0:00:00.448) 0:05:31.785 ******
2026-01-08 00:31:46.658250 | orchestrator | changed: [testbed-manager]
2026-01-08 00:31:46.658263 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:31:46.658274 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:31:46.658285 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:31:46.658296 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:31:46.658307 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:31:46.658318 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:31:46.658329 | orchestrator |
2026-01-08 00:31:46.658341 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-01-08 00:31:46.658352 | orchestrator | Thursday 08 January 2026 00:31:36 +0000 (0:00:00.731) 0:05:32.517 ******
2026-01-08 00:31:46.658363 | orchestrator | ok: [testbed-manager]
2026-01-08 00:31:46.658375 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:31:46.658386 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:31:46.658397 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:31:46.658408 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:31:46.658419 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:31:46.658481 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:31:46.658493 | orchestrator |
2026-01-08 00:31:46.658504 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-01-08 00:31:46.658515 | orchestrator | Thursday 08 January 2026 00:31:38 +0000 (0:00:01.738) 0:05:34.256 ******
2026-01-08 00:31:46.658526 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:31:46.658537 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:31:46.658550 | orchestrator | changed: [testbed-manager]
2026-01-08 00:31:46.658562 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:31:46.658574 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:31:46.658586 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:31:46.658598 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:31:46.658610 | orchestrator |
2026-01-08 00:31:46.658623 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-01-08 00:31:46.658636 | orchestrator | Thursday 08 January 2026 00:31:38 +0000 (0:00:00.783) 0:05:35.039 ******
2026-01-08 00:31:46.658649 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:31:46.658661 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:31:46.658674 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:31:46.658687 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:31:46.658700 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:31:46.658712 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:31:46.658724 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:31:46.658737 | orchestrator |
2026-01-08 00:31:46.658749 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-01-08 00:31:46.658762 | orchestrator | Thursday 08 January 2026 00:31:39 +0000 (0:00:00.292) 0:05:35.331 ******
2026-01-08 00:31:46.658774 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:31:46.658786 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:31:46.658798 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:31:46.658811 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:31:46.658848 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:31:46.658861 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:31:46.658874 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:31:46.658886 | orchestrator |
2026-01-08 00:31:46.658897 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-01-08 00:31:46.658908 | orchestrator | Thursday 08 January 2026 00:31:39 +0000 (0:00:00.409) 0:05:35.741 ******
2026-01-08 00:31:46.658919 | orchestrator | ok: [testbed-manager]
2026-01-08 00:31:46.658930 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:31:46.658941 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:31:46.658952 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:31:46.658963 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:31:46.658974 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:31:46.658985 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:31:46.658996 | orchestrator |
2026-01-08 00:31:46.659007 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-01-08 00:31:46.659018 | orchestrator | Thursday 08 January 2026 00:31:39 +0000 (0:00:00.294) 0:05:36.091 ******
2026-01-08 00:31:46.659029 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:31:46.659040 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:31:46.659051 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:31:46.659062 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:31:46.659073 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:31:46.659084 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:31:46.659095 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:31:46.659106 | orchestrator |
2026-01-08 00:31:46.659132 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-01-08 00:31:46.659144 | orchestrator | Thursday 08 January 2026 00:31:40 +0000 (0:00:00.294) 0:05:36.386 ******
2026-01-08 00:31:46.659155 | orchestrator | ok: [testbed-manager]
2026-01-08 00:31:46.659166 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:31:46.659177 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:31:46.659188 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:31:46.659199 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:31:46.659210 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:31:46.659221 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:31:46.659231 | orchestrator |
2026-01-08 00:31:46.659242 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-01-08 00:31:46.659254 | orchestrator | Thursday 08 January 2026 00:31:40 +0000 (0:00:00.316) 0:05:36.702 ******
2026-01-08 00:31:46.659264 | orchestrator | ok: [testbed-manager] =>
2026-01-08 00:31:46.659275 | orchestrator |   docker_version: 5:27.5.1
2026-01-08 00:31:46.659286 | orchestrator | ok: [testbed-node-3] =>
2026-01-08 00:31:46.659297 | orchestrator |   docker_version: 5:27.5.1
2026-01-08 00:31:46.659308 | orchestrator | ok: [testbed-node-4] =>
2026-01-08 00:31:46.659319 | orchestrator |   docker_version: 5:27.5.1
2026-01-08 00:31:46.659330 | orchestrator | ok: [testbed-node-5] =>
2026-01-08 00:31:46.659341 | orchestrator |   docker_version: 5:27.5.1
2026-01-08 00:31:46.659370 | orchestrator | ok: [testbed-node-0] =>
2026-01-08 00:31:46.659382 | orchestrator |   docker_version: 5:27.5.1
2026-01-08 00:31:46.659393 | orchestrator | ok: [testbed-node-1] =>
2026-01-08 00:31:46.659404 | orchestrator |   docker_version: 5:27.5.1
2026-01-08 00:31:46.659415 | orchestrator | ok: [testbed-node-2] =>
2026-01-08 00:31:46.659442 | orchestrator |   docker_version: 5:27.5.1
2026-01-08 00:31:46.659454 | orchestrator |
2026-01-08 00:31:46.659465 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-01-08 00:31:46.659476 | orchestrator | Thursday 08 January 2026 00:31:40 +0000 (0:00:00.283) 0:05:36.986 ******
2026-01-08 00:31:46.659487 | orchestrator | ok: [testbed-manager] =>
2026-01-08 00:31:46.659498 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-08 00:31:46.659509 | orchestrator | ok: [testbed-node-3] =>
2026-01-08 00:31:46.659520 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-08 00:31:46.659531 | orchestrator | ok: [testbed-node-4] =>
2026-01-08 00:31:46.659542 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-08 00:31:46.659561 | orchestrator | ok: [testbed-node-5] =>
2026-01-08 00:31:46.659572 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-08 00:31:46.659583 | orchestrator | ok: [testbed-node-0] =>
2026-01-08 00:31:46.659594 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-08 00:31:46.659605 | orchestrator | ok: [testbed-node-1] =>
2026-01-08 00:31:46.659616 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-08 00:31:46.659626 | orchestrator | ok: [testbed-node-2] =>
2026-01-08 00:31:46.659637 | orchestrator |   docker_cli_version: 5:27.5.1
2026-01-08 00:31:46.659648 | orchestrator |
2026-01-08 00:31:46.659659 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-01-08 00:31:46.659670 | orchestrator | Thursday 08 January 2026 00:31:41 +0000 (0:00:00.346) 0:05:37.332 ******
2026-01-08 00:31:46.659681 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:31:46.659692 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:31:46.659703 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:31:46.659713 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:31:46.659724 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:31:46.659735 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:31:46.659746 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:31:46.659757 | orchestrator |
2026-01-08 00:31:46.659768 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-01-08 00:31:46.659779 | orchestrator | Thursday 08 January 2026 00:31:41 +0000 (0:00:00.286) 0:05:37.618 ******
2026-01-08 00:31:46.659790 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:31:46.659801 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:31:46.659811 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:31:46.659822 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:31:46.659833 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:31:46.659844 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:31:46.659855 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:31:46.659866 | orchestrator |
2026-01-08 00:31:46.659877 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-01-08 00:31:46.659888 | orchestrator | Thursday 08 January 2026 00:31:41 +0000 (0:00:00.290) 0:05:37.908 ******
2026-01-08 00:31:46.659900 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:31:46.659913 | orchestrator |
2026-01-08 00:31:46.659924 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-01-08 00:31:46.659935 | orchestrator | Thursday 08 January 2026 00:31:42 +0000 (0:00:00.439) 0:05:38.348 ******
2026-01-08 00:31:46.659946 | orchestrator | ok: [testbed-manager]
2026-01-08 00:31:46.659957 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:31:46.659968 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:31:46.659979 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:31:46.659990 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:31:46.660001 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:31:46.660012 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:31:46.660023 | orchestrator |
2026-01-08 00:31:46.660034 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-01-08 00:31:46.660045 | orchestrator | Thursday 08 January 2026 00:31:43 +0000 (0:00:01.012) 0:05:39.361 ******
2026-01-08 00:31:46.660055 | orchestrator | ok: [testbed-manager]
2026-01-08 00:31:46.660066 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:31:46.660077 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:31:46.660088 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:31:46.660098 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:31:46.660109 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:31:46.660120 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:31:46.660131 | orchestrator |
2026-01-08 00:31:46.660142 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-01-08 00:31:46.660154 | orchestrator | Thursday 08 January 2026 00:31:46 +0000 (0:00:02.989) 0:05:42.350 ******
2026-01-08 00:31:46.660171 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-01-08 00:31:46.660183 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-01-08 00:31:46.660199 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-01-08 00:31:46.660210 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-01-08 00:31:46.660221 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-01-08 00:31:46.660232 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-01-08 00:31:46.660243 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:31:46.660254 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-01-08 00:31:46.660265 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-01-08 00:31:46.660276 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:31:46.660287 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-01-08 00:31:46.660298 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-01-08 00:31:46.660309 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-01-08 00:31:46.660320 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-01-08 00:31:46.660331 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:31:46.660342 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-01-08 00:31:46.660359 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-01-08 00:32:49.351188 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-01-08 00:32:49.351274 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:32:49.351283 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-01-08 00:32:49.351289 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-01-08 00:32:49.351294 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-01-08 00:32:49.351299 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:32:49.351304 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:32:49.351309 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-01-08 00:32:49.351314 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-01-08 00:32:49.351318 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-01-08 00:32:49.351323 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:32:49.351328 | orchestrator |
2026-01-08 00:32:49.351334 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-01-08 00:32:49.351340 | orchestrator | Thursday 08 January 2026 00:31:46 +0000 (0:00:00.664) 0:05:43.015 ******
2026-01-08 00:32:49.351345 | orchestrator | ok: [testbed-manager]
2026-01-08 00:32:49.351349 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:32:49.351354 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:32:49.351359 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:32:49.351363 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:32:49.351368 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:32:49.351372 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:32:49.351377 | orchestrator |
2026-01-08 00:32:49.351382 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-01-08 00:32:49.351387 | orchestrator | Thursday 08 January 2026 00:31:54 +0000 (0:00:07.450) 0:05:50.466 ******
2026-01-08 00:32:49.351391 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:32:49.351441 | orchestrator | ok: [testbed-manager]
2026-01-08 00:32:49.351449 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:32:49.351465 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:32:49.351478 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:32:49.351482 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:32:49.351487 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:32:49.351492 | orchestrator |
2026-01-08 00:32:49.351496 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-01-08 00:32:49.351501 | orchestrator | Thursday 08 January 2026 00:31:55 +0000 (0:00:01.121) 0:05:51.588 ******
2026-01-08 00:32:49.351533 | orchestrator | ok: [testbed-manager]
2026-01-08 00:32:49.351538 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:32:49.351543 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:32:49.351548 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:32:49.351552 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:32:49.351557 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:32:49.351561 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:32:49.351566 | orchestrator |
2026-01-08 00:32:49.351571 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-01-08 00:32:49.351575 | orchestrator | Thursday 08 January 2026 00:32:03 +0000 (0:00:08.024) 0:05:59.613 ******
2026-01-08 00:32:49.351580 | orchestrator | changed: [testbed-manager]
2026-01-08 00:32:49.351591 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:32:49.351597 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:32:49.351601 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:32:49.351606 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:32:49.351611 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:32:49.351615 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:32:49.351620 | orchestrator |
2026-01-08 00:32:49.351625 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-01-08 00:32:49.351629 | orchestrator | Thursday 08 January 2026 00:32:07 +0000 (0:00:03.661) 0:06:03.274 ******
2026-01-08 00:32:49.351635 | orchestrator | ok: [testbed-manager]
2026-01-08 00:32:49.351642 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:32:49.351649 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:32:49.351656 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:32:49.351663 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:32:49.351670 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:32:49.351677 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:32:49.351685 | orchestrator |
2026-01-08 00:32:49.351692 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-01-08 00:32:49.351699 | orchestrator | Thursday 08 January 2026 00:32:08 +0000 (0:00:01.384) 0:06:04.659 ******
2026-01-08 00:32:49.351706 | orchestrator | ok: [testbed-manager]
2026-01-08 00:32:49.351714 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:32:49.351719 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:32:49.351724 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:32:49.351728 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:32:49.351734 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:32:49.351739 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:32:49.351744 | orchestrator |
2026-01-08 00:32:49.351749 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-01-08 00:32:49.351766 | orchestrator | Thursday 08 January 2026 00:32:10 +0000 (0:00:01.645) 0:06:06.304 ******
2026-01-08 00:32:49.351772 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:32:49.351777 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:32:49.351782 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:32:49.351788 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:32:49.351793 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:32:49.351798 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:32:49.351804 | orchestrator | changed: [testbed-manager]
2026-01-08 00:32:49.351809 | orchestrator |
2026-01-08 00:32:49.351814 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-01-08 00:32:49.351819 | orchestrator | Thursday 08 January 2026 00:32:10 +0000 (0:00:00.649) 0:06:06.954 ******
2026-01-08 00:32:49.351824 | orchestrator | ok: [testbed-manager]
2026-01-08 00:32:49.351829 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:32:49.351835 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:32:49.351840 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:32:49.351845 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:32:49.351850 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:32:49.351855 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:32:49.351861 | orchestrator |
2026-01-08 00:32:49.351866 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-01-08 00:32:49.351889 | orchestrator | Thursday 08 January 2026 00:32:20 +0000 (0:00:09.980) 0:06:16.934 ******
2026-01-08 00:32:49.351895 | orchestrator | changed: [testbed-manager]
2026-01-08 00:32:49.351900 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:32:49.351906 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:32:49.351911 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:32:49.351916 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:32:49.351921 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:32:49.351926 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:32:49.351932 | orchestrator |
2026-01-08 00:32:49.351937 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-01-08 00:32:49.351942 | orchestrator | Thursday 08 January 2026 00:32:21 +0000 (0:00:00.948) 0:06:17.883 ******
2026-01-08 00:32:49.351948 | orchestrator | ok: [testbed-manager]
2026-01-08 00:32:49.351953 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:32:49.351958 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:32:49.351964 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:32:49.351969 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:32:49.351975 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:32:49.351980 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:32:49.351985 | orchestrator |
2026-01-08 00:32:49.351990 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-01-08 00:32:49.351996 | orchestrator | Thursday 08 January 2026 00:32:31 +0000 (0:00:09.636) 0:06:27.519 ******
2026-01-08 00:32:49.352000 | orchestrator | ok: [testbed-manager]
2026-01-08 00:32:49.352005 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:32:49.352009 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:32:49.352014 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:32:49.352018 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:32:49.352023 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:32:49.352027 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:32:49.352032 | orchestrator |
2026-01-08 00:32:49.352036 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-01-08 00:32:49.352041 | orchestrator | Thursday 08 January 2026 00:32:42 +0000 (0:00:10.941) 0:06:38.461 ******
2026-01-08 00:32:49.352046 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-01-08 00:32:49.352051 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-01-08 00:32:49.352055 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-01-08 00:32:49.352060 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-01-08 00:32:49.352064 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-01-08 00:32:49.352069 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-01-08 00:32:49.352073 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-01-08 00:32:49.352078 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-01-08 00:32:49.352083 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-01-08 00:32:49.352087 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-01-08 00:32:49.352092 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-01-08 00:32:49.352096 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-01-08 00:32:49.352101 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-01-08 00:32:49.352105 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-01-08 00:32:49.352110 | orchestrator |
2026-01-08 00:32:49.352114 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-01-08 00:32:49.352119 | orchestrator | Thursday 08 January 2026 00:32:43 +0000 (0:00:01.397) 0:06:39.859 ******
2026-01-08 00:32:49.352124 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:32:49.352128 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:32:49.352133 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:32:49.352137 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:32:49.352142 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:32:49.352146 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:32:49.352155 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:32:49.352159 | orchestrator |
2026-01-08 00:32:49.352164 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-01-08 00:32:49.352169 | orchestrator | Thursday 08 January 2026 00:32:44 +0000 (0:00:00.525) 0:06:40.384 ******
2026-01-08 00:32:49.352173 | orchestrator | ok: [testbed-manager]
2026-01-08 00:32:49.352178 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:32:49.352182 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:32:49.352187 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:32:49.352191 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:32:49.352196 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:32:49.352200 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:32:49.352205 | orchestrator |
2026-01-08 00:32:49.352209 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-01-08 00:32:49.352215 | orchestrator | Thursday 08 January 2026 00:32:48 +0000 (0:00:04.098) 0:06:44.482 ******
2026-01-08 00:32:49.352220 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:32:49.352224 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:32:49.352229 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:32:49.352233 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:32:49.352238 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:32:49.352243 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:32:49.352247 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:32:49.352252 | orchestrator |
2026-01-08 00:32:49.352258 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-01-08 00:32:49.352263 | orchestrator | Thursday 08 January 2026 00:32:48 +0000 (0:00:00.534) 0:06:45.017 ******
2026-01-08 00:32:49.352267 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-01-08 00:32:49.352272 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-01-08 00:32:49.352277 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:32:49.352282 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-01-08 00:32:49.352286 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-01-08 00:32:49.352291 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:32:49.352295 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-01-08 00:32:49.352300 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-01-08 00:32:49.352304 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:32:49.352312 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-01-08 00:33:09.420080 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-01-08 00:33:09.420183 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:33:09.420194 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-01-08 00:33:09.420248 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-01-08 00:33:09.420256 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:33:09.420263 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-01-08 00:33:09.420270 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-01-08 00:33:09.420277 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:33:09.420284 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-01-08 00:33:09.420291 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-01-08 00:33:09.420297 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:33:09.420304 | orchestrator |
2026-01-08 00:33:09.420313 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-01-08 00:33:09.420322 | orchestrator | Thursday 08 January 2026 00:32:49 +0000 (0:00:00.744) 0:06:45.761 ******
2026-01-08 00:33:09.420329 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:33:09.420336 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:33:09.420343 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:33:09.420350 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:33:09.420407 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:33:09.420415 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:33:09.420421 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:33:09.420428 | orchestrator |
2026-01-08 00:33:09.420435 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-01-08 00:33:09.420442 | orchestrator | Thursday 08 January 2026 00:32:50 +0000 (0:00:00.524) 0:06:46.286 ******
2026-01-08 00:33:09.420449 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:33:09.420456 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:33:09.420462 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:33:09.420469 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:33:09.420476 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:33:09.420483 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:33:09.420489 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:33:09.420496 | orchestrator |
2026-01-08 00:33:09.420503 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-01-08 00:33:09.420509 | orchestrator | Thursday 08 January 2026 00:32:50 +0000 (0:00:00.541) 0:06:46.828 ******
2026-01-08 00:33:09.420515 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:33:09.420521 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:33:09.420527 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:33:09.420532 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:33:09.420538 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:33:09.420545 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:33:09.420551 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:33:09.420557 | orchestrator |
2026-01-08 00:33:09.420564 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-01-08 00:33:09.420570 | orchestrator | Thursday 08 January 2026 00:32:51 +0000 (0:00:00.545) 0:06:47.374 ******
2026-01-08 00:33:09.420576 | orchestrator | ok: [testbed-manager]
2026-01-08 00:33:09.420583 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:33:09.420590 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:33:09.420596 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:33:09.420603 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:33:09.420609 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:33:09.420615 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:33:09.420622 | orchestrator |
2026-01-08 00:33:09.420629 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-01-08 00:33:09.420637 | orchestrator | Thursday 08 January 2026 00:32:53 +0000 (0:00:02.142) 0:06:49.516 ******
2026-01-08 00:33:09.420646 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:33:09.420655 | orchestrator |
2026-01-08 00:33:09.420662 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-01-08 00:33:09.420670 | orchestrator | Thursday 08 January 2026 00:32:54 +0000 (0:00:00.863) 0:06:50.380 ******
2026-01-08 00:33:09.420677 | orchestrator | ok: [testbed-manager]
2026-01-08 00:33:09.420683 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:33:09.420690 | orchestrator | changed:
[testbed-node-4] 2026-01-08 00:33:09.420697 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:33:09.420704 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:33:09.420710 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:33:09.420717 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:33:09.420725 | orchestrator | 2026-01-08 00:33:09.420731 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-01-08 00:33:09.420744 | orchestrator | Thursday 08 January 2026 00:32:55 +0000 (0:00:00.873) 0:06:51.253 ****** 2026-01-08 00:33:09.420752 | orchestrator | ok: [testbed-manager] 2026-01-08 00:33:09.420760 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:33:09.420767 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:33:09.420774 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:33:09.420789 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:33:09.420796 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:33:09.420803 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:33:09.420811 | orchestrator | 2026-01-08 00:33:09.420817 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-01-08 00:33:09.420823 | orchestrator | Thursday 08 January 2026 00:32:55 +0000 (0:00:00.869) 0:06:52.123 ****** 2026-01-08 00:33:09.420829 | orchestrator | ok: [testbed-manager] 2026-01-08 00:33:09.420835 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:33:09.420840 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:33:09.420846 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:33:09.420853 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:33:09.420859 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:33:09.420866 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:33:09.420873 | orchestrator | 2026-01-08 00:33:09.420881 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay 
file is changed] *** 2026-01-08 00:33:09.420906 | orchestrator | Thursday 08 January 2026 00:32:57 +0000 (0:00:01.600) 0:06:53.723 ****** 2026-01-08 00:33:09.420914 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:33:09.420921 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:33:09.420928 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:33:09.420935 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:33:09.420942 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:33:09.420949 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:33:09.420956 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:33:09.420963 | orchestrator | 2026-01-08 00:33:09.420969 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-01-08 00:33:09.420976 | orchestrator | Thursday 08 January 2026 00:32:58 +0000 (0:00:01.433) 0:06:55.157 ****** 2026-01-08 00:33:09.420984 | orchestrator | ok: [testbed-manager] 2026-01-08 00:33:09.420991 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:33:09.420999 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:33:09.421005 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:33:09.421012 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:33:09.421018 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:33:09.421024 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:33:09.421031 | orchestrator | 2026-01-08 00:33:09.421038 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-01-08 00:33:09.421045 | orchestrator | Thursday 08 January 2026 00:33:00 +0000 (0:00:01.418) 0:06:56.576 ****** 2026-01-08 00:33:09.421051 | orchestrator | changed: [testbed-manager] 2026-01-08 00:33:09.421058 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:33:09.421064 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:33:09.421070 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:33:09.421077 | orchestrator | changed: 
[testbed-node-0] 2026-01-08 00:33:09.421083 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:33:09.421089 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:33:09.421095 | orchestrator | 2026-01-08 00:33:09.421101 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-01-08 00:33:09.421107 | orchestrator | Thursday 08 January 2026 00:33:01 +0000 (0:00:01.536) 0:06:58.112 ****** 2026-01-08 00:33:09.421113 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:33:09.421120 | orchestrator | 2026-01-08 00:33:09.421127 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-01-08 00:33:09.421133 | orchestrator | Thursday 08 January 2026 00:33:03 +0000 (0:00:01.063) 0:06:59.175 ****** 2026-01-08 00:33:09.421139 | orchestrator | ok: [testbed-manager] 2026-01-08 00:33:09.421146 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:33:09.421152 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:33:09.421159 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:33:09.421166 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:33:09.421183 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:33:09.421190 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:33:09.421197 | orchestrator | 2026-01-08 00:33:09.421204 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-01-08 00:33:09.421211 | orchestrator | Thursday 08 January 2026 00:33:04 +0000 (0:00:01.427) 0:07:00.603 ****** 2026-01-08 00:33:09.421218 | orchestrator | ok: [testbed-manager] 2026-01-08 00:33:09.421225 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:33:09.421232 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:33:09.421239 | orchestrator | ok: [testbed-node-5] 
2026-01-08 00:33:09.421245 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:33:09.421253 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:33:09.421259 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:33:09.421266 | orchestrator |
2026-01-08 00:33:09.421273 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-01-08 00:33:09.421280 | orchestrator | Thursday 08 January 2026 00:33:05 +0000 (0:00:01.161) 0:07:01.764 ******
2026-01-08 00:33:09.421287 | orchestrator | ok: [testbed-manager]
2026-01-08 00:33:09.421294 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:33:09.421301 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:33:09.421308 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:33:09.421315 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:33:09.421322 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:33:09.421329 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:33:09.421335 | orchestrator |
2026-01-08 00:33:09.421343 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-01-08 00:33:09.421350 | orchestrator | Thursday 08 January 2026 00:33:06 +0000 (0:00:01.142) 0:07:02.907 ******
2026-01-08 00:33:09.421356 | orchestrator | ok: [testbed-manager]
2026-01-08 00:33:09.421363 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:33:09.421370 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:33:09.421399 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:33:09.421406 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:33:09.421412 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:33:09.421418 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:33:09.421424 | orchestrator |
2026-01-08 00:33:09.421430 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-01-08 00:33:09.421437 | orchestrator | Thursday 08 January 2026 00:33:08 +0000 (0:00:01.344) 0:07:04.251 ******
2026-01-08 00:33:09.421444 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:33:09.421451 | orchestrator |
2026-01-08 00:33:09.421458 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-08 00:33:09.421465 | orchestrator | Thursday 08 January 2026 00:33:09 +0000 (0:00:01.003) 0:07:05.255 ******
2026-01-08 00:33:09.421472 | orchestrator |
2026-01-08 00:33:09.421479 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-08 00:33:09.421486 | orchestrator | Thursday 08 January 2026 00:33:09 +0000 (0:00:00.042) 0:07:05.297 ******
2026-01-08 00:33:09.421493 | orchestrator |
2026-01-08 00:33:09.421499 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-08 00:33:09.421506 | orchestrator | Thursday 08 January 2026 00:33:09 +0000 (0:00:00.047) 0:07:05.345 ******
2026-01-08 00:33:09.421512 | orchestrator |
2026-01-08 00:33:09.421518 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-08 00:33:09.421533 | orchestrator | Thursday 08 January 2026 00:33:09 +0000 (0:00:00.040) 0:07:05.385 ******
2026-01-08 00:33:35.954406 | orchestrator |
2026-01-08 00:33:35.954498 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-08 00:33:35.954510 | orchestrator | Thursday 08 January 2026 00:33:09 +0000 (0:00:00.040) 0:07:05.426 ******
2026-01-08 00:33:35.954518 | orchestrator |
2026-01-08 00:33:35.954526 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-08 00:33:35.954533 | orchestrator | Thursday 08 January 2026 00:33:09 +0000 (0:00:00.046) 0:07:05.473 ******
2026-01-08 00:33:35.954559 | orchestrator |
2026-01-08 00:33:35.954567 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-08 00:33:35.954574 | orchestrator | Thursday 08 January 2026 00:33:09 +0000 (0:00:00.040) 0:07:05.513 ******
2026-01-08 00:33:35.954581 | orchestrator |
2026-01-08 00:33:35.954588 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-08 00:33:35.954596 | orchestrator | Thursday 08 January 2026 00:33:09 +0000 (0:00:00.039) 0:07:05.553 ******
2026-01-08 00:33:35.954603 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:33:35.954612 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:33:35.954619 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:33:35.954626 | orchestrator |
2026-01-08 00:33:35.954634 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-01-08 00:33:35.954641 | orchestrator | Thursday 08 January 2026 00:33:10 +0000 (0:00:01.192) 0:07:06.746 ******
2026-01-08 00:33:35.954648 | orchestrator | changed: [testbed-manager]
2026-01-08 00:33:35.954657 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:33:35.954664 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:33:35.954671 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:33:35.954678 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:33:35.954685 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:33:35.954693 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:33:35.954701 | orchestrator |
2026-01-08 00:33:35.954708 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-01-08 00:33:35.954716 | orchestrator | Thursday 08 January 2026 00:33:12 +0000 (0:00:01.720) 0:07:08.467 ******
2026-01-08 00:33:35.954723 | orchestrator | changed: [testbed-manager]
2026-01-08 00:33:35.954730 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:33:35.954737 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:33:35.954747 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:33:35.954759 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:33:35.954772 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:33:35.954783 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:33:35.954794 | orchestrator |
2026-01-08 00:33:35.954805 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-01-08 00:33:35.954816 | orchestrator | Thursday 08 January 2026 00:33:13 +0000 (0:00:01.227) 0:07:09.694 ******
2026-01-08 00:33:35.954826 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:33:35.954836 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:33:35.954848 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:33:35.954860 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:33:35.954873 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:33:35.954886 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:33:35.954899 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:33:35.954912 | orchestrator |
2026-01-08 00:33:35.954925 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-01-08 00:33:35.954937 | orchestrator | Thursday 08 January 2026 00:33:15 +0000 (0:00:02.344) 0:07:12.039 ******
2026-01-08 00:33:35.954946 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:33:35.954955 | orchestrator |
2026-01-08 00:33:35.954963 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-01-08 00:33:35.954972 | orchestrator | Thursday 08 January 2026 00:33:15 +0000 (0:00:00.106) 0:07:12.145 ******
2026-01-08 00:33:35.954982 | orchestrator | ok: [testbed-manager]
2026-01-08 00:33:35.954992 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:33:35.955003 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:33:35.955014 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:33:35.955024 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:33:35.955034 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:33:35.955044 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:33:35.955055 | orchestrator |
2026-01-08 00:33:35.955065 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-01-08 00:33:35.955077 | orchestrator | Thursday 08 January 2026 00:33:16 +0000 (0:00:01.007) 0:07:13.153 ******
2026-01-08 00:33:35.955093 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:33:35.955103 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:33:35.955113 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:33:35.955124 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:33:35.955134 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:33:35.955144 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:33:35.955155 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:33:35.955164 | orchestrator |
2026-01-08 00:33:35.955175 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-01-08 00:33:35.955198 | orchestrator | Thursday 08 January 2026 00:33:17 +0000 (0:00:00.604) 0:07:13.757 ******
2026-01-08 00:33:35.955210 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:33:35.955223 | orchestrator |
2026-01-08 00:33:35.955234 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-01-08 00:33:35.955243 | orchestrator | Thursday 08 January 2026 00:33:18 +0000 (0:00:01.044) 0:07:14.802 ******
2026-01-08 00:33:35.955251 | orchestrator | ok: [testbed-manager]
2026-01-08 00:33:35.955260 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:33:35.955269 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:33:35.955277 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:33:35.955286 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:33:35.955294 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:33:35.955303 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:33:35.955312 | orchestrator |
2026-01-08 00:33:35.955320 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-01-08 00:33:35.955330 | orchestrator | Thursday 08 January 2026 00:33:19 +0000 (0:00:00.847) 0:07:15.649 ******
2026-01-08 00:33:35.955339 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-01-08 00:33:35.955387 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-01-08 00:33:35.955399 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-01-08 00:33:35.955407 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-01-08 00:33:35.955416 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-01-08 00:33:35.955425 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-01-08 00:33:35.955434 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-01-08 00:33:35.955443 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-01-08 00:33:35.955452 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-01-08 00:33:35.955466 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-01-08 00:33:35.955479 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-01-08 00:33:35.955491 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-01-08 00:33:35.955504 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-01-08 00:33:35.955518 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-01-08 00:33:35.955534 | orchestrator |
2026-01-08 00:33:35.955550 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-01-08 00:33:35.955565 | orchestrator | Thursday 08 January 2026 00:33:21 +0000 (0:00:02.496) 0:07:18.146 ******
2026-01-08 00:33:35.955576 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:33:35.955585 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:33:35.955593 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:33:35.955602 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:33:35.955610 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:33:35.955619 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:33:35.955627 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:33:35.955636 | orchestrator |
2026-01-08 00:33:35.955644 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-01-08 00:33:35.955661 | orchestrator | Thursday 08 January 2026 00:33:22 +0000 (0:00:00.746) 0:07:18.892 ******
2026-01-08 00:33:35.955671 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:33:35.955682 | orchestrator |
2026-01-08 00:33:35.955690 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-01-08 00:33:35.955699 | orchestrator | Thursday 08 January 2026 00:33:23 +0000 (0:00:00.822) 0:07:19.715 ******
2026-01-08 00:33:35.955708 | orchestrator | ok: [testbed-manager]
2026-01-08 00:33:35.955716 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:33:35.955725 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:33:35.955734 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:33:35.955742 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:33:35.955751 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:33:35.955759 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:33:35.955767 | orchestrator |
2026-01-08 00:33:35.955776 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-01-08 00:33:35.955785 | orchestrator | Thursday 08 January 2026 00:33:24 +0000 (0:00:00.948) 0:07:20.663 ******
2026-01-08 00:33:35.955793 | orchestrator | ok: [testbed-manager]
2026-01-08 00:33:35.955802 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:33:35.955810 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:33:35.955819 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:33:35.955827 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:33:35.955835 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:33:35.955844 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:33:35.955852 | orchestrator |
2026-01-08 00:33:35.955861 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-01-08 00:33:35.955869 | orchestrator | Thursday 08 January 2026 00:33:25 +0000 (0:00:01.052) 0:07:21.715 ******
2026-01-08 00:33:35.955879 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:33:35.955893 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:33:35.955911 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:33:35.955933 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:33:35.955946 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:33:35.955959 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:33:35.955973 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:33:35.955986 | orchestrator |
2026-01-08 00:33:35.955999 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-01-08 00:33:35.956013 | orchestrator | Thursday 08 January 2026 00:33:26 +0000 (0:00:00.517) 0:07:22.232 ******
2026-01-08 00:33:35.956025 | orchestrator | ok: [testbed-manager]
2026-01-08 00:33:35.956040 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:33:35.956053 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:33:35.956066 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:33:35.956079 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:33:35.956092 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:33:35.956115 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:33:35.956129 | orchestrator |
2026-01-08 00:33:35.956144 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-01-08 00:33:35.956158 | orchestrator | Thursday 08 January 2026 00:33:27 +0000 (0:00:01.547) 0:07:23.780 ******
2026-01-08 00:33:35.956175 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:33:35.956188 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:33:35.956201 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:33:35.956215 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:33:35.956228 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:33:35.956243 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:33:35.956259 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:33:35.956274 | orchestrator |
2026-01-08 00:33:35.956288 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-01-08 00:33:35.956302 | orchestrator | Thursday 08 January 2026 00:33:28 +0000 (0:00:00.521) 0:07:24.302 ******
2026-01-08 00:33:35.956323 | orchestrator | ok: [testbed-manager]
2026-01-08 00:33:35.956332 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:33:35.956340 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:33:35.956349 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:33:35.956440 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:33:35.956454 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:33:35.956474 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:34:08.602926 | orchestrator |
2026-01-08 00:34:08.603037 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-01-08 00:34:08.603056 | orchestrator | Thursday 08 January 2026 00:33:35 +0000 (0:00:07.793) 0:07:32.096 ******
2026-01-08 00:34:08.603070 | orchestrator | ok: [testbed-manager]
2026-01-08 00:34:08.603083 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:34:08.603095 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:34:08.603106 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:34:08.603118 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:34:08.603129 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:34:08.603140 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:34:08.603151 | orchestrator |
2026-01-08 00:34:08.603163 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-01-08 00:34:08.603174 | orchestrator | Thursday 08 January 2026 00:33:37 +0000 (0:00:01.617) 0:07:33.714 ******
2026-01-08 00:34:08.603186 | orchestrator | ok: [testbed-manager]
2026-01-08 00:34:08.603197 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:34:08.603208 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:34:08.603219 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:34:08.603229 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:34:08.603240 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:34:08.603252 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:34:08.603263 | orchestrator |
2026-01-08 00:34:08.603274 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-01-08 00:34:08.603285 | orchestrator | Thursday 08 January 2026 00:33:39 +0000 (0:00:01.762) 0:07:35.476 ******
2026-01-08 00:34:08.603296 | orchestrator | ok: [testbed-manager]
2026-01-08 00:34:08.603307 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:34:08.603388 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:34:08.603400 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:34:08.603411 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:34:08.603423 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:34:08.603434 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:34:08.603445 | orchestrator |
2026-01-08 00:34:08.603459 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-08 00:34:08.603479 | orchestrator | Thursday 08 January 2026 00:33:41 +0000 (0:00:01.695) 0:07:37.172 ******
2026-01-08 00:34:08.603498 | orchestrator | ok: [testbed-manager]
2026-01-08 00:34:08.603518 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:34:08.603538 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:34:08.603556 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:34:08.603574 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:34:08.603592 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:34:08.603611 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:34:08.603629 | orchestrator |
2026-01-08 00:34:08.603648 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-08 00:34:08.603667 | orchestrator | Thursday 08 January 2026 00:33:41 +0000 (0:00:00.894) 0:07:38.067 ******
2026-01-08 00:34:08.603687 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:34:08.603707 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:34:08.603725 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:34:08.603738 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:34:08.603751 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:34:08.603764 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:34:08.603778 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:34:08.603790 | orchestrator |
2026-01-08 00:34:08.603834 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-01-08 00:34:08.603848 | orchestrator | Thursday 08 January 2026 00:33:42 +0000 (0:00:01.041) 0:07:39.108 ******
2026-01-08 00:34:08.603862 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:34:08.603873 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:34:08.603884 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:34:08.603894 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:34:08.603905 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:34:08.603916 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:34:08.603927 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:34:08.603937 | orchestrator |
2026-01-08 00:34:08.603948 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-01-08 00:34:08.603959 | orchestrator | Thursday 08 January 2026 00:33:43 +0000 (0:00:00.525) 0:07:39.634 ******
2026-01-08 00:34:08.603970 | orchestrator | ok: [testbed-manager]
2026-01-08 00:34:08.603981 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:34:08.603992 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:34:08.604002 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:34:08.604013 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:34:08.604024 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:34:08.604034 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:34:08.604045 | orchestrator |
2026-01-08 00:34:08.604056 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-01-08 00:34:08.604067 | orchestrator | Thursday 08 January 2026 00:33:43 +0000 (0:00:00.507) 0:07:40.142 ******
2026-01-08 00:34:08.604078 | orchestrator | ok: [testbed-manager]
2026-01-08 00:34:08.604088 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:34:08.604099 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:34:08.604110 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:34:08.604120 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:34:08.604131 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:34:08.604142 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:34:08.604153 | orchestrator |
2026-01-08 00:34:08.604164 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-01-08 00:34:08.604174 | orchestrator | Thursday 08 January 2026 00:33:44 +0000 (0:00:00.533) 0:07:40.675 ******
2026-01-08 00:34:08.604185 | orchestrator | ok: [testbed-manager]
2026-01-08 00:34:08.604196 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:34:08.604207 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:34:08.604217 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:34:08.604228 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:34:08.604238 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:34:08.604249 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:34:08.604259 | orchestrator |
2026-01-08 00:34:08.604270 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-01-08 00:34:08.604336 | orchestrator | Thursday 08 January 2026 00:33:45 +0000 (0:00:00.694) 0:07:41.370 ******
2026-01-08 00:34:08.604356 | orchestrator | ok: [testbed-manager]
2026-01-08 00:34:08.604374 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:34:08.604391 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:34:08.604407 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:34:08.604424 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:34:08.604442 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:34:08.604462 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:34:08.604479 | orchestrator |
2026-01-08 00:34:08.604518 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-01-08 00:34:08.604531 | orchestrator | Thursday 08 January 2026 00:33:50 +0000 (0:00:05.352) 0:07:46.723 ******
2026-01-08 00:34:08.604542 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:34:08.604553 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:34:08.604567 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:34:08.604584 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:34:08.604602 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:34:08.604613 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:34:08.604624 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:34:08.604647 | orchestrator |
2026-01-08 00:34:08.604658 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-01-08 00:34:08.604669 | orchestrator | Thursday 08 January 2026 00:33:51 +0000 (0:00:00.537) 0:07:47.260 ******
2026-01-08 00:34:08.604682 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:34:08.604696 | orchestrator |
2026-01-08 00:34:08.604707 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-01-08 00:34:08.604736 | orchestrator | Thursday 08 January 2026 00:33:52 +0000 (0:00:01.097) 0:07:48.358 ******
2026-01-08 00:34:08.604748 | orchestrator | ok: [testbed-manager]
2026-01-08 00:34:08.604759 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:34:08.604770 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:34:08.604781 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:34:08.604791 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:34:08.604802 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:34:08.604813 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:34:08.604823 | orchestrator |
2026-01-08 00:34:08.604834 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-01-08 00:34:08.604845 | orchestrator | Thursday 08 January 2026 00:33:54 +0000 (0:00:02.122) 0:07:50.481 ******
2026-01-08 00:34:08.604856 | orchestrator | ok: [testbed-manager]
2026-01-08 00:34:08.604867 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:34:08.604877 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:34:08.604888 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:34:08.604899 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:34:08.604909 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:34:08.604920 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:34:08.604930 | orchestrator |
2026-01-08 00:34:08.604942 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-01-08 00:34:08.604953 | orchestrator | Thursday 08 January 2026 00:33:55 +0000 (0:00:01.234) 0:07:51.715 ******
2026-01-08 00:34:08.604964 | orchestrator | ok: [testbed-manager]
2026-01-08 00:34:08.604974 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:34:08.604985 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:34:08.604996 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:34:08.605006 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:34:08.605017 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:34:08.605028 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:34:08.605039 | orchestrator |
2026-01-08 00:34:08.605050 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-01-08 00:34:08.605061 | orchestrator | Thursday 08 January 2026 00:33:56 +0000 (0:00:00.820) 0:07:52.536 ******
2026-01-08 00:34:08.605072 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-08 00:34:08.605085 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-08 00:34:08.605096 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-08 00:34:08.605107 | orchestrator | changed: [testbed-node-5] =>
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-08 00:34:08.605118 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-08 00:34:08.605129 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-08 00:34:08.605140 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-01-08 00:34:08.605169 | orchestrator | 2026-01-08 00:34:08.605185 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-01-08 00:34:08.605196 | orchestrator | Thursday 08 January 2026 00:33:58 +0000 (0:00:01.953) 0:07:54.490 ****** 2026-01-08 00:34:08.605208 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:34:08.605219 | orchestrator | 2026-01-08 00:34:08.605230 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-01-08 00:34:08.605241 | orchestrator | Thursday 08 January 2026 00:33:59 +0000 (0:00:00.804) 0:07:55.294 ****** 2026-01-08 00:34:08.605252 | orchestrator | changed: [testbed-manager] 2026-01-08 00:34:08.605263 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:34:08.605274 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:34:08.605285 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:34:08.605296 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:34:08.605306 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:34:08.605346 | orchestrator | changed: 
[testbed-node-0] 2026-01-08 00:34:08.605358 | orchestrator | 2026-01-08 00:34:08.605378 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-01-08 00:34:41.608378 | orchestrator | Thursday 08 January 2026 00:34:08 +0000 (0:00:09.449) 0:08:04.744 ****** 2026-01-08 00:34:41.608493 | orchestrator | ok: [testbed-manager] 2026-01-08 00:34:41.608510 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:34:41.608522 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:34:41.608533 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:34:41.608544 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:34:41.608555 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:34:41.608566 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:34:41.608577 | orchestrator | 2026-01-08 00:34:41.608589 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-01-08 00:34:41.608600 | orchestrator | Thursday 08 January 2026 00:34:10 +0000 (0:00:02.001) 0:08:06.746 ****** 2026-01-08 00:34:41.608611 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:34:41.608622 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:34:41.608633 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:34:41.608644 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:34:41.608655 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:34:41.608665 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:34:41.608677 | orchestrator | 2026-01-08 00:34:41.608688 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-01-08 00:34:41.608699 | orchestrator | Thursday 08 January 2026 00:34:11 +0000 (0:00:01.399) 0:08:08.146 ****** 2026-01-08 00:34:41.608710 | orchestrator | changed: [testbed-manager] 2026-01-08 00:34:41.608722 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:34:41.608733 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:34:41.608760 | orchestrator | changed: 
[testbed-node-5] 2026-01-08 00:34:41.608771 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:34:41.608792 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:34:41.608803 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:34:41.608814 | orchestrator | 2026-01-08 00:34:41.608825 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-01-08 00:34:41.608836 | orchestrator | 2026-01-08 00:34:41.608849 | orchestrator | TASK [Include hardening role] ************************************************** 2026-01-08 00:34:41.608861 | orchestrator | Thursday 08 January 2026 00:34:13 +0000 (0:00:01.434) 0:08:09.580 ****** 2026-01-08 00:34:41.608874 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:34:41.608888 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:34:41.608901 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:34:41.608913 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:34:41.608925 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:34:41.608938 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:34:41.608950 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:34:41.608987 | orchestrator | 2026-01-08 00:34:41.609000 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-01-08 00:34:41.609013 | orchestrator | 2026-01-08 00:34:41.609026 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-01-08 00:34:41.609038 | orchestrator | Thursday 08 January 2026 00:34:14 +0000 (0:00:00.728) 0:08:10.308 ****** 2026-01-08 00:34:41.609051 | orchestrator | changed: [testbed-manager] 2026-01-08 00:34:41.609064 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:34:41.609076 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:34:41.609087 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:34:41.609098 | orchestrator | changed: [testbed-node-0] 2026-01-08 
00:34:41.609109 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:34:41.609120 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:34:41.609131 | orchestrator | 2026-01-08 00:34:41.609142 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-01-08 00:34:41.609153 | orchestrator | Thursday 08 January 2026 00:34:15 +0000 (0:00:01.448) 0:08:11.757 ****** 2026-01-08 00:34:41.609164 | orchestrator | ok: [testbed-manager] 2026-01-08 00:34:41.609175 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:34:41.609186 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:34:41.609197 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:34:41.609208 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:34:41.609219 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:34:41.609229 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:34:41.609240 | orchestrator | 2026-01-08 00:34:41.609251 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-01-08 00:34:41.609285 | orchestrator | Thursday 08 January 2026 00:34:17 +0000 (0:00:01.516) 0:08:13.273 ****** 2026-01-08 00:34:41.609298 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:34:41.609309 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:34:41.609320 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:34:41.609331 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:34:41.609341 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:34:41.609352 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:34:41.609363 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:34:41.609374 | orchestrator | 2026-01-08 00:34:41.609385 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-01-08 00:34:41.609396 | orchestrator | Thursday 08 January 2026 00:34:17 +0000 (0:00:00.491) 0:08:13.765 ****** 2026-01-08 00:34:41.609422 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:34:41.609434 | orchestrator | 2026-01-08 00:34:41.609445 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-01-08 00:34:41.609456 | orchestrator | Thursday 08 January 2026 00:34:18 +0000 (0:00:01.147) 0:08:14.912 ****** 2026-01-08 00:34:41.609469 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:34:41.609482 | orchestrator | 2026-01-08 00:34:41.609493 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-01-08 00:34:41.609504 | orchestrator | Thursday 08 January 2026 00:34:19 +0000 (0:00:00.859) 0:08:15.772 ****** 2026-01-08 00:34:41.609515 | orchestrator | changed: [testbed-manager] 2026-01-08 00:34:41.609526 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:34:41.609537 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:34:41.609547 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:34:41.609558 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:34:41.609569 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:34:41.609580 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:34:41.609591 | orchestrator | 2026-01-08 00:34:41.609619 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-01-08 00:34:41.609631 | orchestrator | Thursday 08 January 2026 00:34:28 +0000 (0:00:09.196) 0:08:24.968 ****** 2026-01-08 00:34:41.609651 | orchestrator | changed: [testbed-manager] 2026-01-08 00:34:41.609662 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:34:41.609673 | orchestrator | changed: [testbed-node-4] 2026-01-08 
00:34:41.609684 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:34:41.609694 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:34:41.609705 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:34:41.609716 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:34:41.609726 | orchestrator | 2026-01-08 00:34:41.609737 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-01-08 00:34:41.609748 | orchestrator | Thursday 08 January 2026 00:34:29 +0000 (0:00:00.867) 0:08:25.836 ****** 2026-01-08 00:34:41.609759 | orchestrator | changed: [testbed-manager] 2026-01-08 00:34:41.609770 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:34:41.609780 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:34:41.609791 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:34:41.609802 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:34:41.609813 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:34:41.609823 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:34:41.609834 | orchestrator | 2026-01-08 00:34:41.609845 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-01-08 00:34:41.609856 | orchestrator | Thursday 08 January 2026 00:34:31 +0000 (0:00:01.488) 0:08:27.324 ****** 2026-01-08 00:34:41.609867 | orchestrator | changed: [testbed-manager] 2026-01-08 00:34:41.609877 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:34:41.609888 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:34:41.609899 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:34:41.609910 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:34:41.609920 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:34:41.609931 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:34:41.609942 | orchestrator | 2026-01-08 00:34:41.609953 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-01-08 00:34:41.609964 | orchestrator | Thursday 08 January 2026 00:34:34 +0000 (0:00:02.986) 0:08:30.311 ******
2026-01-08 00:34:41.609975 | orchestrator | changed: [testbed-manager]
2026-01-08 00:34:41.609985 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:34:41.609996 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:34:41.610007 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:34:41.610078 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:34:41.610091 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:34:41.610102 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:34:41.610113 | orchestrator |
2026-01-08 00:34:41.610124 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-01-08 00:34:41.610135 | orchestrator | Thursday 08 January 2026 00:34:35 +0000 (0:00:01.354) 0:08:31.666 ******
2026-01-08 00:34:41.610145 | orchestrator | changed: [testbed-manager]
2026-01-08 00:34:41.610156 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:34:41.610167 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:34:41.610178 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:34:41.610189 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:34:41.610200 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:34:41.610211 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:34:41.610221 | orchestrator |
2026-01-08 00:34:41.610232 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-01-08 00:34:41.610243 | orchestrator |
2026-01-08 00:34:41.610254 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-01-08 00:34:41.610290 | orchestrator | Thursday 08 January 2026 00:34:36 +0000 (0:00:01.164) 0:08:32.831 ******
2026-01-08 00:34:41.610310 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:34:41.610330 | orchestrator |
2026-01-08 00:34:41.610349 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-08 00:34:41.610377 | orchestrator | Thursday 08 January 2026 00:34:37 +0000 (0:00:00.837) 0:08:33.668 ******
2026-01-08 00:34:41.610389 | orchestrator | ok: [testbed-manager]
2026-01-08 00:34:41.610400 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:34:41.610411 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:34:41.610421 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:34:41.610432 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:34:41.610443 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:34:41.610453 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:34:41.610464 | orchestrator |
2026-01-08 00:34:41.610475 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-08 00:34:41.610486 | orchestrator | Thursday 08 January 2026 00:34:38 +0000 (0:00:01.044) 0:08:34.713 ******
2026-01-08 00:34:41.610497 | orchestrator | changed: [testbed-manager]
2026-01-08 00:34:41.610508 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:34:41.610518 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:34:41.610529 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:34:41.610547 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:34:41.610558 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:34:41.610569 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:34:41.610580 | orchestrator |
2026-01-08 00:34:41.610591 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-01-08 00:34:41.610602 | orchestrator | Thursday 08 January 2026 00:34:39 +0000 (0:00:01.180) 0:08:35.893 ******
2026-01-08 00:34:41.610613 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:34:41.610624 | orchestrator |
2026-01-08 00:34:41.610635 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-08 00:34:41.610646 | orchestrator | Thursday 08 January 2026 00:34:40 +0000 (0:00:01.009) 0:08:36.903 ******
2026-01-08 00:34:41.610657 | orchestrator | ok: [testbed-manager]
2026-01-08 00:34:41.610668 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:34:41.610679 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:34:41.610690 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:34:41.610701 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:34:41.610711 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:34:41.610722 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:34:41.610733 | orchestrator |
2026-01-08 00:34:41.610752 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-08 00:34:43.353292 | orchestrator | Thursday 08 January 2026 00:34:41 +0000 (0:00:00.843) 0:08:37.746 ******
2026-01-08 00:34:43.353395 | orchestrator | changed: [testbed-manager]
2026-01-08 00:34:43.353411 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:34:43.353423 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:34:43.353435 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:34:43.353446 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:34:43.353457 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:34:43.353468 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:34:43.353479 | orchestrator |
2026-01-08 00:34:43.353491 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 00:34:43.353503 | orchestrator | testbed-manager : ok=168  changed=41  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-01-08 00:34:43.353516 | orchestrator | testbed-node-0 : ok=177  changed=70  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-08 00:34:43.353527 | orchestrator | testbed-node-1 : ok=177  changed=70  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-08 00:34:43.353538 | orchestrator | testbed-node-2 : ok=177  changed=70  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-08 00:34:43.353548 | orchestrator | testbed-node-3 : ok=175  changed=66  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-01-08 00:34:43.353593 | orchestrator | testbed-node-4 : ok=175  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-08 00:34:43.353629 | orchestrator | testbed-node-5 : ok=175  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-08 00:34:43.353647 | orchestrator |
2026-01-08 00:34:43.353666 | orchestrator |
2026-01-08 00:34:43.353684 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 00:34:43.353704 | orchestrator | Thursday 08 January 2026 00:34:42 +0000 (0:00:01.209) 0:08:38.956 ******
2026-01-08 00:34:43.353724 | orchestrator | ===============================================================================
2026-01-08 00:34:43.353743 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.09s
2026-01-08 00:34:43.353762 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.68s
2026-01-08 00:34:43.353781 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.14s
2026-01-08 00:34:43.353801 | orchestrator | osism.commons.packages : Upgrade packages ------------------------------ 18.33s
2026-01-08 00:34:43.353821 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.13s
2026-01-08 00:34:43.353842 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.84s
2026-01-08 00:34:43.353862 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.69s
2026-01-08 00:34:43.353882 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.94s
2026-01-08 00:34:43.353911 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.98s
2026-01-08 00:34:43.353940 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.78s
2026-01-08 00:34:43.353959 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.64s
2026-01-08 00:34:43.353979 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.45s
2026-01-08 00:34:43.354000 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.20s
2026-01-08 00:34:43.354100 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.70s
2026-01-08 00:34:43.354123 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.12s
2026-01-08 00:34:43.354143 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.02s
2026-01-08 00:34:43.354163 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.79s
2026-01-08 00:34:43.354201 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.45s
2026-01-08 00:34:43.354221 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.88s
2026-01-08 00:34:43.354238 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.45s
2026-01-08 00:34:43.671580 | orchestrator | + osism apply fail2ban
2026-01-08 00:34:56.289229 | orchestrator | 2026-01-08 00:34:56 | INFO  | Task e82d256e-1f7e-46fe-89fd-8c7ed0e32f0b (fail2ban) was prepared for execution.
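The PLAY RECAP blocks in this console output follow Ansible's fixed `host : ok=… changed=… unreachable=… failed=…` layout. As a side note for post-processing such logs, the following is a minimal, hypothetical helper (not part of the OSISM tooling or this job) that parses recap rows from a saved console log and flags hosts with failures:

```python
import re

# Matches Ansible "PLAY RECAP" host rows such as:
#   testbed-manager : ok=168  changed=41  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line: str):
    """Return (host, {counter: value}) for a recap row, or None for any other line."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    counters = dict(
        (key, int(value))
        for key, value in (pair.split("=") for pair in m.group("counters").split())
    )
    return m.group("host"), counters

def failed_hosts(lines):
    """Hosts whose recap row reports failed or unreachable tasks."""
    bad = []
    for line in lines:
        parsed = parse_recap_line(line)
        if parsed and (parsed[1].get("failed", 0) or parsed[1].get("unreachable", 0)):
            bad.append(parsed[0])
    return bad
```

Fed the recap rows above, `failed_hosts` returns an empty list, since every host reports `failed=0` and `unreachable=0`.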
2026-01-08 00:34:56.289412 | orchestrator | 2026-01-08 00:34:56 | INFO  | It takes a moment until task e82d256e-1f7e-46fe-89fd-8c7ed0e32f0b (fail2ban) has been started and output is visible here.
2026-01-08 00:35:18.058509 | orchestrator |
2026-01-08 00:35:18.058640 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-01-08 00:35:18.058657 | orchestrator |
2026-01-08 00:35:18.058670 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-01-08 00:35:18.058682 | orchestrator | Thursday 08 January 2026 00:35:00 +0000 (0:00:00.277) 0:00:00.277 ******
2026-01-08 00:35:18.058694 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-08 00:35:18.058753 | orchestrator |
2026-01-08 00:35:18.058766 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-01-08 00:35:18.058777 | orchestrator | Thursday 08 January 2026 00:35:02 +0000 (0:00:01.148) 0:00:01.425 ******
2026-01-08 00:35:18.058788 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:35:18.058800 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:35:18.058811 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:35:18.058821 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:35:18.058832 | orchestrator | changed: [testbed-manager]
2026-01-08 00:35:18.058843 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:35:18.058853 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:35:18.058864 | orchestrator |
2026-01-08 00:35:18.058875 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-01-08 00:35:18.058886 | orchestrator | Thursday 08 January 2026 00:35:12 +0000 (0:00:10.906) 0:00:12.331 ******
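Each task header in this output carries two timers: the parenthesized value is the duration of the task that just finished, and the trailing value is the cumulative play time; the TASKS RECAP sections rank tasks by the former. A hypothetical sketch (the function name and line format assumptions are ours, taken only from the headers visible in this log) for extracting those durations:

```python
import re

# Matches the parenthesized per-task duration in timing headers such as:
#   Thursday 08 January 2026 00:35:02 +0000 (0:00:01.148) 0:00:01.425 ******
TIMING_RE = re.compile(r"\((?P<h>\d+):(?P<m>\d{2}):(?P<s>\d{2}\.\d+)\)")

def prev_task_seconds(header: str) -> float:
    """Duration in seconds of the previously finished task, from the parenthesized field."""
    m = TIMING_RE.search(header)
    if not m:
        raise ValueError("no timing field found in header")
    return int(m.group("h")) * 3600 + int(m.group("m")) * 60 + float(m.group("s"))
```

For the fail2ban install header above, the parenthesized `0:00:01.148` yields 1.148 seconds, matching the 10.91s/1.15s figures that later appear in that play's TASKS RECAP for the respective tasks.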
2026-01-08 00:35:18.058897 | orchestrator | changed: [testbed-manager] 2026-01-08 00:35:18.058908 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:35:18.058918 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:35:18.058929 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:35:18.058940 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:35:18.058950 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:35:18.058961 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:35:18.058971 | orchestrator | 2026-01-08 00:35:18.058982 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-01-08 00:35:18.058993 | orchestrator | Thursday 08 January 2026 00:35:14 +0000 (0:00:01.491) 0:00:13.823 ****** 2026-01-08 00:35:18.059005 | orchestrator | ok: [testbed-manager] 2026-01-08 00:35:18.059017 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:35:18.059027 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:35:18.059038 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:35:18.059049 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:35:18.059059 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:35:18.059070 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:35:18.059081 | orchestrator | 2026-01-08 00:35:18.059092 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-01-08 00:35:18.059102 | orchestrator | Thursday 08 January 2026 00:35:15 +0000 (0:00:01.471) 0:00:15.294 ****** 2026-01-08 00:35:18.059113 | orchestrator | changed: [testbed-manager] 2026-01-08 00:35:18.059124 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:35:18.059135 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:35:18.059146 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:35:18.059157 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:35:18.059168 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:35:18.059179 | orchestrator | changed: 
[testbed-node-5]
2026-01-08 00:35:18.059215 | orchestrator |
2026-01-08 00:35:18.059226 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 00:35:18.059238 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:35:18.059250 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:35:18.059261 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:35:18.059272 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:35:18.059283 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:35:18.059294 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:35:18.059328 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:35:18.059339 | orchestrator |
2026-01-08 00:35:18.059350 | orchestrator |
2026-01-08 00:35:18.059361 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 00:35:18.059372 | orchestrator | Thursday 08 January 2026 00:35:17 +0000 (0:00:01.724) 0:00:17.019 ******
2026-01-08 00:35:18.059383 | orchestrator | ===============================================================================
2026-01-08 00:35:18.059410 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 10.91s
2026-01-08 00:35:18.059421 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.72s
2026-01-08 00:35:18.059432 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.49s
2026-01-08 00:35:18.059443 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.47s
2026-01-08 00:35:18.059454 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.15s
2026-01-08 00:35:18.409724 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-01-08 00:35:18.409846 | orchestrator | + osism apply network
2026-01-08 00:35:30.660274 | orchestrator | 2026-01-08 00:35:30 | INFO  | Task ddce46d2-c0aa-4470-9e45-726964fd3954 (network) was prepared for execution.
2026-01-08 00:35:30.660350 | orchestrator | 2026-01-08 00:35:30 | INFO  | It takes a moment until task ddce46d2-c0aa-4470-9e45-726964fd3954 (network) has been started and output is visible here.
2026-01-08 00:35:59.772895 | orchestrator |
2026-01-08 00:35:59.772982 | orchestrator | PLAY [Apply role network] ******************************************************
2026-01-08 00:35:59.772992 | orchestrator |
2026-01-08 00:35:59.773000 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-01-08 00:35:59.773008 | orchestrator | Thursday 08 January 2026 00:35:34 +0000 (0:00:00.260) 0:00:00.260 ******
2026-01-08 00:35:59.773015 | orchestrator | ok: [testbed-manager]
2026-01-08 00:35:59.773024 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:35:59.773031 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:35:59.773038 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:35:59.773045 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:35:59.773052 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:35:59.773058 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:35:59.773065 | orchestrator |
2026-01-08 00:35:59.773072 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-01-08 00:35:59.773079 | orchestrator | Thursday 08 January 2026 00:35:35 +0000 (0:00:00.737) 0:00:00.997 ******
2026-01-08 00:35:59.773086 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-08 00:35:59.773117 | orchestrator |
2026-01-08 00:35:59.773125 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-01-08 00:35:59.773132 | orchestrator | Thursday 08 January 2026 00:35:36 +0000 (0:00:01.204) 0:00:02.202 ******
2026-01-08 00:35:59.773138 | orchestrator | ok: [testbed-manager]
2026-01-08 00:35:59.773145 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:35:59.773152 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:35:59.773159 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:35:59.773165 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:35:59.773172 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:35:59.773178 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:35:59.773185 | orchestrator |
2026-01-08 00:35:59.773192 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-01-08 00:35:59.773199 | orchestrator | Thursday 08 January 2026 00:35:39 +0000 (0:00:02.128) 0:00:04.330 ******
2026-01-08 00:35:59.773206 | orchestrator | ok: [testbed-manager]
2026-01-08 00:35:59.773213 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:35:59.773219 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:35:59.773226 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:35:59.773252 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:35:59.773260 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:35:59.773266 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:35:59.773273 | orchestrator |
2026-01-08 00:35:59.773279 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-01-08 00:35:59.773286 | orchestrator | Thursday 08 January 2026 00:35:40 +0000 (0:00:01.765) 0:00:06.095 ******
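Editorial aside on the netplan tasks in this play: the role prepares a netplan configuration on localhost from a template and copies it to each host (the cleanup task later in this log shows the managed file as /etc/netplan/01-osism.yaml). A minimal sketch of what such a rendered file could look like — the interface name and address here are illustrative assumptions, not values taken from this job's inventory:

```yaml
# Hypothetical sketch of a rendered /etc/netplan/01-osism.yaml.
# The interface name (ens3) and the address are assumptions for
# illustration only, not values from this job.
network:
  version: 2
  ethernets:
    ens3:
      dhcp4: false
      addresses:
        - 192.168.16.10/20
```

With a file like this in place, `netplan apply` (or the role's handler) would render it to the systemd-networkd backend; cloud-init's own 50-cloud-init.yaml is removed by the cleanup task so the two do not conflict.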
2026-01-08 00:35:59.773293 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-01-08 00:35:59.773301 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-01-08 00:35:59.773307 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-01-08 00:35:59.773314 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-01-08 00:35:59.773321 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-01-08 00:35:59.773328 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-01-08 00:35:59.773335 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-01-08 00:35:59.773341 | orchestrator |
2026-01-08 00:35:59.773348 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-01-08 00:35:59.773355 | orchestrator | Thursday 08 January 2026 00:35:41 +0000 (0:00:00.974) 0:00:07.070 ******
2026-01-08 00:35:59.773362 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-08 00:35:59.773369 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-08 00:35:59.773376 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-08 00:35:59.773383 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-08 00:35:59.773390 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-08 00:35:59.773396 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-08 00:35:59.773403 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-08 00:35:59.773410 | orchestrator |
2026-01-08 00:35:59.773416 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-01-08 00:35:59.773425 | orchestrator | Thursday 08 January 2026 00:35:45 +0000 (0:00:03.432) 0:00:10.502 ******
2026-01-08 00:35:59.773433 | orchestrator | changed: [testbed-manager]
2026-01-08 00:35:59.773441 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:35:59.773448 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:35:59.773456 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:35:59.773463 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:35:59.773471 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:35:59.773479 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:35:59.773486 | orchestrator |
2026-01-08 00:35:59.773494 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-01-08 00:35:59.773503 | orchestrator | Thursday 08 January 2026 00:35:46 +0000 (0:00:01.657) 0:00:12.160 ******
2026-01-08 00:35:59.773511 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-08 00:35:59.773518 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-08 00:35:59.773526 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-08 00:35:59.773534 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-08 00:35:59.773541 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-08 00:35:59.773550 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-08 00:35:59.773558 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-08 00:35:59.773566 | orchestrator |
2026-01-08 00:35:59.773574 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-01-08 00:35:59.773582 | orchestrator | Thursday 08 January 2026 00:35:48 +0000 (0:00:01.771) 0:00:13.931 ******
2026-01-08 00:35:59.773590 | orchestrator | ok: [testbed-manager]
2026-01-08 00:35:59.773598 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:35:59.773606 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:35:59.773613 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:35:59.773621 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:35:59.773629 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:35:59.773636 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:35:59.773643 | orchestrator |
2026-01-08 00:35:59.773651 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-01-08 00:35:59.773678 | orchestrator | Thursday 08 January 2026 00:35:49 +0000 (0:00:01.183) 0:00:15.115 ******
2026-01-08 00:35:59.773687 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:35:59.773695 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:35:59.773703 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:35:59.773710 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:35:59.773718 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:35:59.773726 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:35:59.773733 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:35:59.773741 | orchestrator |
2026-01-08 00:35:59.773763 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-01-08 00:35:59.773772 | orchestrator | Thursday 08 January 2026 00:35:50 +0000 (0:00:00.684) 0:00:15.800 ******
2026-01-08 00:35:59.773780 | orchestrator | ok: [testbed-manager]
2026-01-08 00:35:59.773789 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:35:59.773797 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:35:59.773803 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:35:59.773810 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:35:59.773816 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:35:59.773823 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:35:59.773829 | orchestrator |
2026-01-08 00:35:59.773836 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-01-08 00:35:59.773843 | orchestrator | Thursday 08 January 2026 00:35:52 +0000 (0:00:02.232) 0:00:18.032 ******
2026-01-08 00:35:59.773850 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:35:59.773856 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:35:59.773863 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:35:59.773870 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:35:59.773876 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:35:59.773883 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:35:59.773890 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-01-08 00:35:59.773898 | orchestrator |
2026-01-08 00:35:59.773905 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-01-08 00:35:59.773911 | orchestrator | Thursday 08 January 2026 00:35:53 +0000 (0:00:00.911) 0:00:18.943 ******
2026-01-08 00:35:59.773918 | orchestrator | ok: [testbed-manager]
2026-01-08 00:35:59.773925 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:35:59.773931 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:35:59.773938 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:35:59.773944 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:35:59.773951 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:35:59.773957 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:35:59.773964 | orchestrator |
2026-01-08 00:35:59.773971 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-01-08 00:35:59.773977 | orchestrator | Thursday 08 January 2026 00:35:55 +0000 (0:00:01.700) 0:00:20.644 ******
2026-01-08 00:35:59.773984 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-08 00:35:59.773993 | orchestrator |
2026-01-08 00:35:59.773999 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-01-08 00:35:59.774006 | orchestrator | Thursday 08 January 2026 00:35:56 +0000 (0:00:01.133) 0:00:21.962 ******
2026-01-08 00:35:59.774013 | orchestrator | ok: [testbed-manager]
2026-01-08 00:35:59.774063 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:35:59.774073 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:35:59.774084 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:35:59.774110 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:35:59.774117 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:35:59.774124 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:35:59.774140 | orchestrator |
2026-01-08 00:35:59.774148 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-01-08 00:35:59.774160 | orchestrator | Thursday 08 January 2026 00:35:57 +0000 (0:00:01.133) 0:00:23.096 ******
2026-01-08 00:35:59.774180 | orchestrator | ok: [testbed-manager]
2026-01-08 00:35:59.774194 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:35:59.774201 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:35:59.774208 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:35:59.774214 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:35:59.774221 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:35:59.774228 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:35:59.774234 | orchestrator |
2026-01-08 00:35:59.774241 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-01-08 00:35:59.774248 | orchestrator | Thursday 08 January 2026 00:35:58 +0000 (0:00:00.669) 0:00:23.765 ******
2026-01-08 00:35:59.774255 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-01-08 00:35:59.774262 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-01-08 00:35:59.774269 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-01-08 00:35:59.774275 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-01-08 00:35:59.774286 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-08 00:35:59.774293 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-01-08 00:35:59.774300 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-08 00:35:59.774306 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-01-08 00:35:59.774313 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-08 00:35:59.774320 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-08 00:35:59.774326 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-08 00:35:59.774333 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-01-08 00:35:59.774340 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-08 00:35:59.774346 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-01-08 00:35:59.774353 | orchestrator |
2026-01-08 00:35:59.774366 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-01-08 00:36:16.878964 | orchestrator | Thursday 08 January 2026 00:35:59 +0000 (0:00:01.257) 0:00:25.023 ******
2026-01-08 00:36:16.879105 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:36:16.879127 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:36:16.879141 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:36:16.879153 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:36:16.879173 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:36:16.879192 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:36:16.879206 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:36:16.879220 | orchestrator |
2026-01-08 00:36:16.879237 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-01-08 00:36:16.879280 | orchestrator | Thursday 08 January 2026 00:36:00 +0000 (0:00:00.697) 0:00:25.721 ******
2026-01-08 00:36:16.879298 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-08 00:36:16.879315 | orchestrator |
2026-01-08 00:36:16.879329 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-01-08 00:36:16.879345 | orchestrator | Thursday 08 January 2026 00:36:05 +0000 (0:00:04.631) 0:00:30.352 ******
2026-01-08 00:36:16.879361 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-01-08 00:36:16.879401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-01-08 00:36:16.879412 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-01-08 00:36:16.879421 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-01-08 00:36:16.879430 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-01-08 00:36:16.879439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-01-08 00:36:16.879448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-01-08 00:36:16.879457 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-01-08 00:36:16.879465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-01-08 00:36:16.879501 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-01-08 00:36:16.879519 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-01-08 00:36:16.879555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-01-08 00:36:16.879570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-01-08 00:36:16.879585 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-01-08 00:36:16.879601 | orchestrator |
2026-01-08 00:36:16.879615 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-01-08 00:36:16.879629 | orchestrator | Thursday 08 January 2026 00:36:11 +0000 (0:00:05.949) 0:00:36.302 ******
2026-01-08 00:36:16.879655 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-01-08 00:36:16.879671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-01-08 00:36:16.879687 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-01-08 00:36:16.879702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-01-08 00:36:16.879717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-01-08 00:36:16.879732 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-01-08 00:36:16.879749 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-01-08 00:36:16.879763 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-01-08 00:36:16.879779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-01-08 00:36:16.879803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-01-08 00:36:16.879819 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-01-08 00:36:16.879833 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-01-08 00:36:16.879862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-01-08 00:36:30.374475 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-01-08 00:36:30.374611 | orchestrator |
2026-01-08 00:36:30.374631 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-01-08 00:36:30.374645 | orchestrator | Thursday 08 January 2026 00:36:16 +0000 (0:00:05.820) 0:00:42.123 ******
2026-01-08 00:36:30.374657 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-08 00:36:30.374669 | orchestrator |
2026-01-08 00:36:30.374680 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
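Editorial aside on the two "Create systemd networkd … files" tasks above: each item logs the exact VXLAN parameters per host, and the cleanup task below lists the rendered file names (/etc/systemd/network/30-vxlan0.netdev and friends). As a hedged sketch of what the rendered netdev for testbed-node-0's vxlan0 (VNI 42, MTU 1350, local IP 192.168.16.10, per the logged item) might contain — the exact keys and layout are assumptions about the role's template, not copied from this job:

```ini
; Hypothetical sketch of /etc/systemd/network/30-vxlan0.netdev on
; testbed-node-0, built from the logged item (vni 42, mtu 1350,
; local_ip 192.168.16.10). Section layout is an assumption, not
; taken from the role's actual template.
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.10
```

The companion .network file produced by the next task would then attach addresses where the logged 'addresses' list is non-empty (e.g. 192.168.128.10/20 on vxlan1 of testbed-node-0), while the unicast 'dests' list would typically become per-peer forwarding entries so the VXLAN works without multicast.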
2026-01-08 00:36:30.374691 | orchestrator | Thursday 08 January 2026 00:36:18 +0000 (0:00:01.311) 0:00:43.434 ******
2026-01-08 00:36:30.374703 | orchestrator | ok: [testbed-manager]
2026-01-08 00:36:30.374715 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:36:30.374728 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:36:30.374747 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:36:30.374765 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:36:30.374783 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:36:30.374800 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:36:30.374818 | orchestrator |
2026-01-08 00:36:30.374836 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-01-08 00:36:30.374848 | orchestrator | Thursday 08 January 2026 00:36:19 +0000 (0:00:01.197) 0:00:44.631 ******
2026-01-08 00:36:30.374859 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-08 00:36:30.374871 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-08 00:36:30.374881 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-08 00:36:30.374892 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-08 00:36:30.374903 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-08 00:36:30.374913 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-08 00:36:30.374924 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-08 00:36:30.374935 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-08 00:36:30.374946 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:36:30.374958 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-08 00:36:30.374969 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-08 00:36:30.374980 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-08 00:36:30.374990 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-08 00:36:30.375001 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:36:30.375012 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-08 00:36:30.375025 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-08 00:36:30.375067 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-08 00:36:30.375080 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-08 00:36:30.375093 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:36:30.375105 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-08 00:36:30.375118 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-08 00:36:30.375131 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-08 00:36:30.375144 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-08 00:36:30.375167 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:36:30.375193 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-08 00:36:30.375206 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-08 00:36:30.375220 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-08 00:36:30.375232 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-08 00:36:30.375245 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:36:30.375259 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:36:30.375272 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-08 00:36:30.375282 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-08 00:36:30.375293 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-08 00:36:30.375304 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-08 00:36:30.375315 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:36:30.375325 | orchestrator |
2026-01-08 00:36:30.375336 | orchestrator | TASK [osism.commons.network : Include network extra init] **********************
2026-01-08 00:36:30.375365 | orchestrator | Thursday 08 January 2026 00:36:20 +0000 (0:00:00.947) 0:00:45.579 ******
2026-01-08 00:36:30.375377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-08 00:36:30.375389 | orchestrator |
2026-01-08 00:36:30.375400 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] ****************
2026-01-08 00:36:30.375411 | orchestrator | Thursday 08 January 2026 00:36:21 +0000 (0:00:01.281) 0:00:46.861 ******
2026-01-08 00:36:30.375422 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:36:30.375432 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:36:30.375443 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:36:30.375454 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:36:30.375465 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:36:30.375476 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:36:30.375486 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:36:30.375497 | orchestrator |
2026-01-08 00:36:30.375508 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] *******
2026-01-08 00:36:30.375519 | orchestrator | Thursday 08 January 2026 00:36:22 +0000 (0:00:00.850) 0:00:47.516 ******
2026-01-08 00:36:30.375530 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:36:30.375540 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:36:30.375551 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:36:30.375561 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:36:30.375572 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:36:30.375583 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:36:30.375593 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:36:30.375604 | orchestrator |
2026-01-08 00:36:30.375615 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] *****
2026-01-08 00:36:30.375626 | orchestrator | Thursday 08 January 2026 00:36:23 +0000 (0:00:00.850) 0:00:48.366 ******
2026-01-08 00:36:30.375637 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:36:30.375647 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:36:30.375658 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:36:30.375669 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:36:30.375679 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:36:30.375690 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:36:30.375701 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:36:30.375712 | orchestrator |
2026-01-08 00:36:30.375722 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] *****
2026-01-08 00:36:30.375740 | orchestrator | Thursday 08 January 2026 00:36:23 +0000 (0:00:00.665) 0:00:49.032 ******
2026-01-08 00:36:30.375769 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:36:30.375787 | orchestrator | ok: [testbed-manager]
2026-01-08 00:36:30.375805 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:36:30.375824 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:36:30.375843 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:36:30.375860 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:36:30.375878 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:36:30.375896 | orchestrator |
2026-01-08 00:36:30.375914 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-01-08 00:36:30.375934 | orchestrator | Thursday 08 January 2026 00:36:25 +0000 (0:00:01.831) 0:00:50.863 ******
2026-01-08 00:36:30.375952 | orchestrator | ok: [testbed-manager]
2026-01-08 00:36:30.375972 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:36:30.375984 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:36:30.375994 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:36:30.376005 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:36:30.376016 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:36:30.376027 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:36:30.376078 | orchestrator |
2026-01-08 00:36:30.376090 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-01-08 00:36:30.376101 | orchestrator | Thursday 08 January 2026 00:36:26 +0000 (0:00:01.016) 0:00:51.879 ******
2026-01-08 00:36:30.376112 | orchestrator | ok: [testbed-manager]
2026-01-08 00:36:30.376123 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:36:30.376134 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:36:30.376145 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:36:30.376156 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:36:30.376166 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:36:30.376177 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:36:30.376188 | orchestrator |
2026-01-08 00:36:30.376199 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-01-08 00:36:30.376210 | orchestrator | Thursday 08 January 2026 00:36:28 +0000 (0:00:02.318) 0:00:54.198 ******
2026-01-08 00:36:30.376221 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:36:30.376232 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:36:30.376243 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:36:30.376254 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:36:30.376265 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:36:30.376275 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:36:30.376286 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:36:30.376297 | orchestrator |
2026-01-08 00:36:30.376315 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-01-08 00:36:30.376327 | orchestrator | Thursday 08 January 2026 00:36:29 +0000 (0:00:00.838) 0:00:55.036 ******
2026-01-08 00:36:30.376338 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:36:30.376349 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:36:30.376360 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:36:30.376371 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:36:30.376382 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:36:30.376393 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:36:30.376404 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:36:30.376415 | orchestrator |
2026-01-08 00:36:30.376426 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 00:36:30.376438 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-01-08 00:36:30.376451 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-08 00:36:30.376473 | orchestrator | testbed-node-1 : ok=24
 changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-08 00:36:30.807134 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-08 00:36:30.807226 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-08 00:36:30.807233 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-08 00:36:30.807237 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-08 00:36:30.807241 | orchestrator | 2026-01-08 00:36:30.807246 | orchestrator | 2026-01-08 00:36:30.807251 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 00:36:30.807256 | orchestrator | Thursday 08 January 2026 00:36:30 +0000 (0:00:00.589) 0:00:55.625 ****** 2026-01-08 00:36:30.807260 | orchestrator | =============================================================================== 2026-01-08 00:36:30.807263 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.95s 2026-01-08 00:36:30.807267 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.82s 2026-01-08 00:36:30.807271 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.63s 2026-01-08 00:36:30.807275 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.43s 2026-01-08 00:36:30.807278 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.32s 2026-01-08 00:36:30.807282 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.23s 2026-01-08 00:36:30.807286 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.13s 2026-01-08 00:36:30.807290 | orchestrator | osism.commons.network : Disable and 
stop network-extra-init service ----- 1.83s 2026-01-08 00:36:30.807293 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.77s 2026-01-08 00:36:30.807297 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.77s 2026-01-08 00:36:30.807301 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.70s 2026-01-08 00:36:30.807305 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.66s 2026-01-08 00:36:30.807308 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.32s 2026-01-08 00:36:30.807312 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.31s 2026-01-08 00:36:30.807316 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.28s 2026-01-08 00:36:30.807319 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.26s 2026-01-08 00:36:30.807323 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.20s 2026-01-08 00:36:30.807327 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.20s 2026-01-08 00:36:30.807331 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.18s 2026-01-08 00:36:30.807334 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.13s 2026-01-08 00:36:31.127722 | orchestrator | + osism apply wireguard 2026-01-08 00:36:43.219364 | orchestrator | 2026-01-08 00:36:43 | INFO  | Task 82d2252f-23b2-4611-9c70-d73b9b151a5a (wireguard) was prepared for execution. 2026-01-08 00:36:43.219451 | orchestrator | 2026-01-08 00:36:43 | INFO  | It takes a moment until task 82d2252f-23b2-4611-9c70-d73b9b151a5a (wireguard) has been started and output is visible here. 
2026-01-08 00:37:03.990758 | orchestrator |
2026-01-08 00:37:03.990831 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-01-08 00:37:03.990838 | orchestrator |
2026-01-08 00:37:03.990844 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-01-08 00:37:03.990849 | orchestrator | Thursday 08 January 2026 00:36:47 +0000 (0:00:00.223) 0:00:00.223 ******
2026-01-08 00:37:03.990856 | orchestrator | ok: [testbed-manager]
2026-01-08 00:37:03.990876 | orchestrator |
2026-01-08 00:37:03.990881 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-01-08 00:37:03.990885 | orchestrator | Thursday 08 January 2026 00:36:49 +0000 (0:00:01.578) 0:00:01.801 ******
2026-01-08 00:37:03.990888 | orchestrator | changed: [testbed-manager]
2026-01-08 00:37:03.990893 | orchestrator |
2026-01-08 00:37:03.990897 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-01-08 00:37:03.990901 | orchestrator | Thursday 08 January 2026 00:36:56 +0000 (0:00:06.997) 0:00:08.799 ******
2026-01-08 00:37:03.990905 | orchestrator | changed: [testbed-manager]
2026-01-08 00:37:03.990909 | orchestrator |
2026-01-08 00:37:03.990912 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-01-08 00:37:03.990916 | orchestrator | Thursday 08 January 2026 00:36:56 +0000 (0:00:00.563) 0:00:09.363 ******
2026-01-08 00:37:03.990920 | orchestrator | changed: [testbed-manager]
2026-01-08 00:37:03.990924 | orchestrator |
2026-01-08 00:37:03.990927 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-01-08 00:37:03.990931 | orchestrator | Thursday 08 January 2026 00:36:57 +0000 (0:00:00.441) 0:00:09.804 ******
2026-01-08 00:37:03.990935 | orchestrator | ok: [testbed-manager]
2026-01-08 00:37:03.990939 | orchestrator |
2026-01-08 00:37:03.990942 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-01-08 00:37:03.990946 | orchestrator | Thursday 08 January 2026 00:36:57 +0000 (0:00:00.662) 0:00:10.467 ******
2026-01-08 00:37:03.990950 | orchestrator | ok: [testbed-manager]
2026-01-08 00:37:03.990953 | orchestrator |
2026-01-08 00:37:03.990957 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-01-08 00:37:03.990993 | orchestrator | Thursday 08 January 2026 00:36:58 +0000 (0:00:00.422) 0:00:10.890 ******
2026-01-08 00:37:03.990998 | orchestrator | ok: [testbed-manager]
2026-01-08 00:37:03.991002 | orchestrator |
2026-01-08 00:37:03.991006 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-01-08 00:37:03.991009 | orchestrator | Thursday 08 January 2026 00:36:58 +0000 (0:00:00.398) 0:00:11.288 ******
2026-01-08 00:37:03.991013 | orchestrator | changed: [testbed-manager]
2026-01-08 00:37:03.991017 | orchestrator |
2026-01-08 00:37:03.991021 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-01-08 00:37:03.991024 | orchestrator | Thursday 08 January 2026 00:36:59 +0000 (0:00:01.244) 0:00:12.532 ******
2026-01-08 00:37:03.991028 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-08 00:37:03.991032 | orchestrator | changed: [testbed-manager]
2026-01-08 00:37:03.991036 | orchestrator |
2026-01-08 00:37:03.991040 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-01-08 00:37:03.991044 | orchestrator | Thursday 08 January 2026 00:37:00 +0000 (0:00:01.003) 0:00:13.536 ******
2026-01-08 00:37:03.991048 | orchestrator | changed: [testbed-manager]
2026-01-08 00:37:03.991052 | orchestrator |
2026-01-08 00:37:03.991055 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-01-08 00:37:03.991059 | orchestrator | Thursday 08 January 2026 00:37:02 +0000 (0:00:01.744) 0:00:15.280 ******
2026-01-08 00:37:03.991063 | orchestrator | changed: [testbed-manager]
2026-01-08 00:37:03.991067 | orchestrator |
2026-01-08 00:37:03.991070 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 00:37:03.991074 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:37:03.991079 | orchestrator |
2026-01-08 00:37:03.991083 | orchestrator |
2026-01-08 00:37:03.991087 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 00:37:03.991091 | orchestrator | Thursday 08 January 2026 00:37:03 +0000 (0:00:00.987) 0:00:16.267 ******
2026-01-08 00:37:03.991094 | orchestrator | ===============================================================================
2026-01-08 00:37:03.991098 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.00s
2026-01-08 00:37:03.991102 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.74s
2026-01-08 00:37:03.991110 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.58s
2026-01-08 00:37:03.991114 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.24s
2026-01-08 00:37:03.991118 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.00s
2026-01-08 00:37:03.991122 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.99s
2026-01-08 00:37:03.991125 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.66s
2026-01-08 00:37:03.991129 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s
2026-01-08 00:37:03.991133 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.44s
2026-01-08 00:37:03.991137 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.42s
2026-01-08 00:37:03.991150 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.40s
2026-01-08 00:37:04.308187 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-01-08 00:37:04.344166 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-01-08 00:37:04.344239 | orchestrator | Dload Upload Total Spent Left Speed
2026-01-08 00:37:04.417438 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 191 0 --:--:-- --:--:-- --:--:-- 191
2026-01-08 00:37:04.432506 | orchestrator | + osism apply --environment custom workarounds
2026-01-08 00:37:06.464758 | orchestrator | 2026-01-08 00:37:06 | INFO  | Trying to run play workarounds in environment custom
2026-01-08 00:37:16.574239 | orchestrator | 2026-01-08 00:37:16 | INFO  | Task 76263232-a1d8-4454-94d9-4a26762670b9 (workarounds) was prepared for execution.
2026-01-08 00:37:16.574320 | orchestrator | 2026-01-08 00:37:16 | INFO  | It takes a moment until task 76263232-a1d8-4454-94d9-4a26762670b9 (workarounds) has been started and output is visible here.
2026-01-08 00:37:42.940536 | orchestrator |
2026-01-08 00:37:42.940642 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-08 00:37:42.940655 | orchestrator |
2026-01-08 00:37:42.940663 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-01-08 00:37:42.940671 | orchestrator | Thursday 08 January 2026 00:37:20 +0000 (0:00:00.131) 0:00:00.131 ******
2026-01-08 00:37:42.940679 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-01-08 00:37:42.940687 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-01-08 00:37:42.940695 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-01-08 00:37:42.940702 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-01-08 00:37:42.940710 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-01-08 00:37:42.940717 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-01-08 00:37:42.940724 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-01-08 00:37:42.940732 | orchestrator |
2026-01-08 00:37:42.940739 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-01-08 00:37:42.940746 | orchestrator |
2026-01-08 00:37:42.940754 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-01-08 00:37:42.940761 | orchestrator | Thursday 08 January 2026 00:37:21 +0000 (0:00:00.834) 0:00:00.966 ******
2026-01-08 00:37:42.940769 | orchestrator | ok: [testbed-manager]
2026-01-08 00:37:42.940778 | orchestrator |
2026-01-08 00:37:42.940785 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-01-08 00:37:42.940792 | orchestrator |
2026-01-08 00:37:42.940800 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-01-08 00:37:42.940807 | orchestrator | Thursday 08 January 2026 00:37:24 +0000 (0:00:02.421) 0:00:03.387 ******
2026-01-08 00:37:42.940836 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:37:42.940844 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:37:42.940851 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:37:42.940862 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:37:42.940873 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:37:42.940884 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:37:42.940922 | orchestrator |
2026-01-08 00:37:42.940935 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-01-08 00:37:42.940947 | orchestrator |
2026-01-08 00:37:42.940960 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-01-08 00:37:42.940973 | orchestrator | Thursday 08 January 2026 00:37:25 +0000 (0:00:01.830) 0:00:05.217 ******
2026-01-08 00:37:42.940986 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-08 00:37:42.940999 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-08 00:37:42.941007 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-08 00:37:42.941014 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-08 00:37:42.941022 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-08 00:37:42.941029 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-08 00:37:42.941036 | orchestrator |
2026-01-08 00:37:42.941044 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-01-08 00:37:42.941051 | orchestrator | Thursday 08 January 2026 00:37:27 +0000 (0:00:01.509) 0:00:06.727 ******
2026-01-08 00:37:42.941059 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:37:42.941067 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:37:42.941076 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:37:42.941084 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:37:42.941099 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:37:42.941113 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:37:42.941128 | orchestrator |
2026-01-08 00:37:42.941142 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-01-08 00:37:42.941158 | orchestrator | Thursday 08 January 2026 00:37:31 +0000 (0:00:03.913) 0:00:10.640 ******
2026-01-08 00:37:42.941172 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:37:42.941184 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:37:42.941193 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:37:42.941201 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:37:42.941210 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:37:42.941219 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:37:42.941227 | orchestrator |
2026-01-08 00:37:42.941236 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-01-08 00:37:42.941245 | orchestrator |
2026-01-08 00:37:42.941254 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-01-08 00:37:42.941263 | orchestrator | Thursday 08 January 2026 00:37:32 +0000 (0:00:00.710) 0:00:11.350 ******
2026-01-08 00:37:42.941271 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:37:42.941280 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:37:42.941289 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:37:42.941298 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:37:42.941307 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:37:42.941315 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:37:42.941324 | orchestrator | changed: [testbed-manager]
2026-01-08 00:37:42.941332 | orchestrator |
2026-01-08 00:37:42.941341 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-01-08 00:37:42.941350 | orchestrator | Thursday 08 January 2026 00:37:33 +0000 (0:00:01.607) 0:00:12.958 ******
2026-01-08 00:37:42.941359 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:37:42.941402 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:37:42.941420 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:37:42.941435 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:37:42.941446 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:37:42.941455 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:37:42.941480 | orchestrator | changed: [testbed-manager]
2026-01-08 00:37:42.941489 | orchestrator |
2026-01-08 00:37:42.941498 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-01-08 00:37:42.941507 | orchestrator | Thursday 08 January 2026 00:37:35 +0000 (0:00:01.554) 0:00:14.550 ******
2026-01-08 00:37:42.941516 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:37:42.941525 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:37:42.941533 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:37:42.941542 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:37:42.941551 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:37:42.941559 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:37:42.941568 | orchestrator | ok: [testbed-manager]
2026-01-08 00:37:42.941576 | orchestrator |
2026-01-08 00:37:42.941585 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-01-08 00:37:42.941594 | orchestrator | Thursday 08 January 2026 00:37:36 +0000 (0:00:01.554) 0:00:16.105 ******
2026-01-08 00:37:42.941602 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:37:42.941611 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:37:42.941620 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:37:42.941628 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:37:42.941637 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:37:42.941645 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:37:42.941654 | orchestrator | changed: [testbed-manager]
2026-01-08 00:37:42.941662 | orchestrator |
2026-01-08 00:37:42.941671 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-01-08 00:37:42.941680 | orchestrator | Thursday 08 January 2026 00:37:38 +0000 (0:00:01.880) 0:00:17.985 ******
2026-01-08 00:37:42.941688 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:37:42.941697 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:37:42.941705 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:37:42.941714 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:37:42.941722 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:37:42.941731 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:37:42.941739 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:37:42.941748 | orchestrator |
2026-01-08 00:37:42.941757 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-01-08 00:37:42.941765 | orchestrator |
2026-01-08 00:37:42.941774 | orchestrator | TASK [Install python3-docker] **************************************************
2026-01-08 00:37:42.941783 | orchestrator | Thursday 08 January 2026 00:37:39 +0000 (0:00:00.638) 0:00:18.624 ******
2026-01-08 00:37:42.941791 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:37:42.941800 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:37:42.941815 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:37:42.941829 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:37:42.941844 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:37:42.941858 | orchestrator | ok: [testbed-manager]
2026-01-08 00:37:42.941873 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:37:42.941882 | orchestrator |
2026-01-08 00:37:42.941928 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 00:37:42.941940 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-08 00:37:42.941950 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:37:42.941959 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:37:42.941968 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:37:42.941985 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:37:42.941994 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:37:42.942003 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:37:42.942012 | orchestrator |
2026-01-08 00:37:42.942066 | orchestrator |
2026-01-08 00:37:42.942076 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 00:37:42.942085 | orchestrator | Thursday 08 January 2026 00:37:42 +0000 (0:00:03.607) 0:00:22.232 ******
2026-01-08 00:37:42.942094 | orchestrator | ===============================================================================
2026-01-08 00:37:42.942102 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.91s
2026-01-08 00:37:42.942111 | orchestrator | Install python3-docker -------------------------------------------------- 3.61s
2026-01-08 00:37:42.942119 | orchestrator | Apply netplan configuration --------------------------------------------- 2.42s
2026-01-08 00:37:42.942128 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.88s
2026-01-08 00:37:42.942136 | orchestrator | Apply netplan configuration --------------------------------------------- 1.83s
2026-01-08 00:37:42.942145 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.61s
2026-01-08 00:37:42.942154 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.59s
2026-01-08 00:37:42.942162 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.56s
2026-01-08 00:37:42.942171 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.51s
2026-01-08 00:37:42.942185 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.83s
2026-01-08 00:37:42.942194 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.71s
2026-01-08 00:37:42.942211 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.64s
2026-01-08 00:37:43.633722 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-01-08 00:37:55.872378 | orchestrator | 2026-01-08 00:37:55 | INFO  | Task 13261bd8-5a6e-4ba3-866f-532fff9f9118 (reboot) was prepared for execution.
2026-01-08 00:37:55.872491 | orchestrator | 2026-01-08 00:37:55 | INFO  | It takes a moment until task 13261bd8-5a6e-4ba3-866f-532fff9f9118 (reboot) has been started and output is visible here.
2026-01-08 00:38:06.483338 | orchestrator | 2026-01-08 00:38:06.483458 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-08 00:38:06.483473 | orchestrator | 2026-01-08 00:38:06.483484 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-08 00:38:06.483494 | orchestrator | Thursday 08 January 2026 00:38:00 +0000 (0:00:00.205) 0:00:00.205 ****** 2026-01-08 00:38:06.483504 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:38:06.483556 | orchestrator | 2026-01-08 00:38:06.483567 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-08 00:38:06.483577 | orchestrator | Thursday 08 January 2026 00:38:00 +0000 (0:00:00.098) 0:00:00.304 ****** 2026-01-08 00:38:06.483586 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:38:06.483595 | orchestrator | 2026-01-08 00:38:06.483604 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-08 00:38:06.483613 | orchestrator | Thursday 08 January 2026 00:38:01 +0000 (0:00:00.992) 0:00:01.296 ****** 2026-01-08 00:38:06.483622 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:38:06.483631 | orchestrator | 2026-01-08 00:38:06.483640 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-08 00:38:06.483671 | orchestrator | 2026-01-08 00:38:06.483680 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-08 00:38:06.483689 | orchestrator | Thursday 08 January 2026 00:38:01 +0000 (0:00:00.118) 0:00:01.414 ****** 2026-01-08 00:38:06.483698 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:38:06.483706 | orchestrator | 2026-01-08 00:38:06.483715 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-08 00:38:06.483724 | orchestrator | Thursday 08 January 
2026 00:38:01 +0000 (0:00:00.108) 0:00:01.523 ******
2026-01-08 00:38:06.483733 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:38:06.483742 | orchestrator |
2026-01-08 00:38:06.483750 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-08 00:38:06.483759 | orchestrator | Thursday 08 January 2026 00:38:02 +0000 (0:00:00.702) 0:00:02.225 ******
2026-01-08 00:38:06.483768 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:38:06.483777 | orchestrator |
2026-01-08 00:38:06.483791 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-08 00:38:06.483805 | orchestrator |
2026-01-08 00:38:06.483828 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-08 00:38:06.483845 | orchestrator | Thursday 08 January 2026 00:38:02 +0000 (0:00:00.116) 0:00:02.342 ******
2026-01-08 00:38:06.483888 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:38:06.483903 | orchestrator |
2026-01-08 00:38:06.483918 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-08 00:38:06.483932 | orchestrator | Thursday 08 January 2026 00:38:02 +0000 (0:00:00.207) 0:00:02.549 ******
2026-01-08 00:38:06.483947 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:38:06.483960 | orchestrator |
2026-01-08 00:38:06.483976 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-08 00:38:06.483993 | orchestrator | Thursday 08 January 2026 00:38:03 +0000 (0:00:00.704) 0:00:03.254 ******
2026-01-08 00:38:06.484009 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:38:06.484024 | orchestrator |
2026-01-08 00:38:06.484035 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-08 00:38:06.484046 | orchestrator |
2026-01-08 00:38:06.484058 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-08 00:38:06.484074 | orchestrator | Thursday 08 January 2026 00:38:03 +0000 (0:00:00.132) 0:00:03.387 ******
2026-01-08 00:38:06.484099 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:38:06.484114 | orchestrator |
2026-01-08 00:38:06.484129 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-08 00:38:06.484145 | orchestrator | Thursday 08 January 2026 00:38:03 +0000 (0:00:00.110) 0:00:03.497 ******
2026-01-08 00:38:06.484161 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:38:06.484176 | orchestrator |
2026-01-08 00:38:06.484188 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-08 00:38:06.484199 | orchestrator | Thursday 08 January 2026 00:38:04 +0000 (0:00:00.686) 0:00:04.183 ******
2026-01-08 00:38:06.484209 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:38:06.484219 | orchestrator |
2026-01-08 00:38:06.484253 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-08 00:38:06.484263 | orchestrator |
2026-01-08 00:38:06.484272 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-08 00:38:06.484280 | orchestrator | Thursday 08 January 2026 00:38:04 +0000 (0:00:00.099) 0:00:04.283 ******
2026-01-08 00:38:06.484289 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:38:06.484298 | orchestrator |
2026-01-08 00:38:06.484307 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-08 00:38:06.484315 | orchestrator | Thursday 08 January 2026 00:38:04 +0000 (0:00:00.102) 0:00:04.385 ******
2026-01-08 00:38:06.484325 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:38:06.484334 | orchestrator |
2026-01-08 00:38:06.484352 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-08 00:38:06.484362 | orchestrator | Thursday 08 January 2026 00:38:05 +0000 (0:00:00.707) 0:00:05.093 ******
2026-01-08 00:38:06.484381 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:38:06.484390 | orchestrator |
2026-01-08 00:38:06.484414 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-08 00:38:06.484423 | orchestrator |
2026-01-08 00:38:06.484431 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-08 00:38:06.484440 | orchestrator | Thursday 08 January 2026 00:38:05 +0000 (0:00:00.122) 0:00:05.216 ******
2026-01-08 00:38:06.484449 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:38:06.484459 | orchestrator |
2026-01-08 00:38:06.484470 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-08 00:38:06.484481 | orchestrator | Thursday 08 January 2026 00:38:05 +0000 (0:00:00.136) 0:00:05.353 ******
2026-01-08 00:38:06.484492 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:38:06.484503 | orchestrator |
2026-01-08 00:38:06.484514 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-08 00:38:06.484525 | orchestrator | Thursday 08 January 2026 00:38:06 +0000 (0:00:00.705) 0:00:06.058 ******
2026-01-08 00:38:06.484557 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:38:06.484568 | orchestrator |
2026-01-08 00:38:06.484579 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 00:38:06.484591 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:38:06.484603 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:38:06.484614 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:38:06.484625 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:38:06.484636 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:38:06.484647 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:38:06.484658 | orchestrator |
2026-01-08 00:38:06.484669 | orchestrator |
2026-01-08 00:38:06.484681 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 00:38:06.484692 | orchestrator | Thursday 08 January 2026 00:38:06 +0000 (0:00:00.050) 0:00:06.109 ******
2026-01-08 00:38:06.484703 | orchestrator | ===============================================================================
2026-01-08 00:38:06.484714 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.50s
2026-01-08 00:38:06.484725 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.76s
2026-01-08 00:38:06.484736 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.64s
2026-01-08 00:38:06.782474 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-01-08 00:38:18.860067 | orchestrator | 2026-01-08 00:38:18 | INFO  | Task dd5e0c49-dfde-4cdd-9b8d-577a02b086b3 (wait-for-connection) was prepared for execution.
2026-01-08 00:38:18.860242 | orchestrator | 2026-01-08 00:38:18 | INFO  | It takes a moment until task dd5e0c49-dfde-4cdd-9b8d-577a02b086b3 (wait-for-connection) has been started and output is visible here.
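The log above shows the two-phase pattern used here: the nodes are rebooted with the "do not wait" task, and a separate `wait-for-connection` play then blocks until each host answers again. The retry logic behind such a play can be sketched as a small shell helper (a generic sketch, not the testbed's actual code; `wait_until` is a hypothetical name):

```shell
#!/usr/bin/env bash
# wait_until RETRIES DELAY CMD...: run CMD until it succeeds, retrying up
# to RETRIES times and sleeping DELAY seconds between attempts.
# Hypothetical helper illustrating the retry pattern, not a testbed script.
wait_until() {
    local retries=$1 delay=$2 i=1
    shift 2
    while ! "$@"; do
        if (( i >= retries )); then
            return 1   # gave up: command never succeeded
        fi
        (( i++ ))
        sleep "$delay"
    done
}
```

Something like `wait_until 60 5 ssh -o ConnectTimeout=5 ubuntu@testbed-node-0 true` would then approximate, per host, what the `wait-for-connection` play does.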
2026-01-08 00:38:35.431618 | orchestrator |
2026-01-08 00:38:35.431735 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-01-08 00:38:35.431753 | orchestrator |
2026-01-08 00:38:35.431765 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-01-08 00:38:35.431777 | orchestrator | Thursday 08 January 2026 00:38:23 +0000 (0:00:00.245) 0:00:00.245 ******
2026-01-08 00:38:35.431879 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:38:35.431895 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:38:35.431906 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:38:35.431917 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:38:35.431930 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:38:35.431949 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:38:35.431968 | orchestrator |
2026-01-08 00:38:35.431986 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 00:38:35.432004 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:38:35.432027 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:38:35.432047 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:38:35.432066 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:38:35.432085 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:38:35.432097 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:38:35.432108 | orchestrator |
2026-01-08 00:38:35.432119 | orchestrator |
2026-01-08 00:38:35.432130 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 00:38:35.432141 | orchestrator | Thursday 08 January 2026 00:38:35 +0000 (0:00:11.769) 0:00:12.015 ******
2026-01-08 00:38:35.432171 | orchestrator | ===============================================================================
2026-01-08 00:38:35.432190 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.77s
2026-01-08 00:38:35.769111 | orchestrator | + osism apply hddtemp
2026-01-08 00:38:47.959179 | orchestrator | 2026-01-08 00:38:47 | INFO  | Task f41828e6-62d6-493a-bbcd-cf78681670b0 (hddtemp) was prepared for execution.
2026-01-08 00:38:47.959309 | orchestrator | 2026-01-08 00:38:47 | INFO  | It takes a moment until task f41828e6-62d6-493a-bbcd-cf78681670b0 (hddtemp) has been started and output is visible here.
2026-01-08 00:39:16.651539 | orchestrator |
2026-01-08 00:39:16.651666 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-01-08 00:39:16.651682 | orchestrator |
2026-01-08 00:39:16.651693 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-01-08 00:39:16.651704 | orchestrator | Thursday 08 January 2026 00:38:52 +0000 (0:00:00.262) 0:00:00.262 ******
2026-01-08 00:39:16.651715 | orchestrator | ok: [testbed-manager]
2026-01-08 00:39:16.651726 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:39:16.651736 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:39:16.651784 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:39:16.651794 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:39:16.651804 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:39:16.651814 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:39:16.651824 | orchestrator |
2026-01-08 00:39:16.651834 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-01-08 00:39:16.651844 | orchestrator | Thursday 08 January 2026 00:38:53 +0000 (0:00:00.790) 0:00:01.053 ******
2026-01-08 00:39:16.651856 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-08 00:39:16.651868 | orchestrator |
2026-01-08 00:39:16.651878 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-01-08 00:39:16.651889 | orchestrator | Thursday 08 January 2026 00:38:54 +0000 (0:00:01.284) 0:00:02.337 ******
2026-01-08 00:39:16.651899 | orchestrator | ok: [testbed-manager]
2026-01-08 00:39:16.651955 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:39:16.651966 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:39:16.651975 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:39:16.651984 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:39:16.651994 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:39:16.652003 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:39:16.652013 | orchestrator |
2026-01-08 00:39:16.652023 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-01-08 00:39:16.652032 | orchestrator | Thursday 08 January 2026 00:38:56 +0000 (0:00:02.102) 0:00:04.439 ******
2026-01-08 00:39:16.652042 | orchestrator | changed: [testbed-manager]
2026-01-08 00:39:16.652052 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:39:16.652062 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:39:16.652073 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:39:16.652083 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:39:16.652096 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:39:16.652107 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:39:16.652118 | orchestrator |
2026-01-08 00:39:16.652130 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-01-08 00:39:16.652140 | orchestrator | Thursday 08 January 2026 00:38:57 +0000 (0:00:01.222) 0:00:05.661 ******
2026-01-08 00:39:16.652149 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:39:16.652159 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:39:16.652168 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:39:16.652178 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:39:16.652187 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:39:16.652197 | orchestrator | ok: [testbed-manager]
2026-01-08 00:39:16.652206 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:39:16.652216 | orchestrator |
2026-01-08 00:39:16.652225 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-01-08 00:39:16.652235 | orchestrator | Thursday 08 January 2026 00:38:58 +0000 (0:00:01.162) 0:00:06.824 ******
2026-01-08 00:39:16.652245 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:39:16.652254 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:39:16.652264 | orchestrator | changed: [testbed-manager]
2026-01-08 00:39:16.652273 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:39:16.652283 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:39:16.652292 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:39:16.652302 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:39:16.652311 | orchestrator |
2026-01-08 00:39:16.652321 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-01-08 00:39:16.652331 | orchestrator | Thursday 08 January 2026 00:38:59 +0000 (0:00:00.838) 0:00:07.663 ******
2026-01-08 00:39:16.652340 | orchestrator | changed: [testbed-manager]
2026-01-08 00:39:16.652350 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:39:16.652359 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:39:16.652369 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:39:16.652378 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:39:16.652388 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:39:16.652397 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:39:16.652407 | orchestrator |
2026-01-08 00:39:16.652416 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-01-08 00:39:16.652426 | orchestrator | Thursday 08 January 2026 00:39:13 +0000 (0:00:13.340) 0:00:21.004 ******
2026-01-08 00:39:16.652436 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-08 00:39:16.652446 | orchestrator |
2026-01-08 00:39:16.652456 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-01-08 00:39:16.652466 | orchestrator | Thursday 08 January 2026 00:39:14 +0000 (0:00:01.243) 0:00:22.247 ******
2026-01-08 00:39:16.652475 | orchestrator | changed: [testbed-manager]
2026-01-08 00:39:16.652485 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:39:16.652516 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:39:16.652527 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:39:16.652536 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:39:16.652546 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:39:16.652555 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:39:16.652565 | orchestrator |
2026-01-08 00:39:16.652574 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 00:39:16.652584 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:39:16.652614 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-08 00:39:16.652625 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-08 00:39:16.652634 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-08 00:39:16.652644 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-08 00:39:16.652654 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-08 00:39:16.652664 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-08 00:39:16.652673 | orchestrator |
2026-01-08 00:39:16.652683 | orchestrator |
2026-01-08 00:39:16.652693 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 00:39:16.652703 | orchestrator | Thursday 08 January 2026 00:39:16 +0000 (0:00:01.925) 0:00:24.172 ******
2026-01-08 00:39:16.652713 | orchestrator | ===============================================================================
2026-01-08 00:39:16.652723 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.34s
2026-01-08 00:39:16.652732 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.10s
2026-01-08 00:39:16.652766 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.93s
2026-01-08 00:39:16.652783 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.28s
2026-01-08 00:39:16.652800 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.24s
2026-01-08 00:39:16.652816 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.22s
2026-01-08 00:39:16.652831 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.16s
2026-01-08 00:39:16.652845 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.84s
2026-01-08 00:39:16.652855 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.79s
2026-01-08 00:39:16.953569 | orchestrator | ++ semver latest 7.1.1
2026-01-08 00:39:17.015798 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-08 00:39:17.015896 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-08 00:39:17.015913 | orchestrator | + sudo systemctl restart manager.service
2026-01-08 00:39:30.657066 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-08 00:39:30.657169 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-01-08 00:39:30.657185 | orchestrator | + local max_attempts=60
2026-01-08 00:39:30.657197 | orchestrator | + local name=ceph-ansible
2026-01-08 00:39:30.657208 | orchestrator | + local attempt_num=1
2026-01-08 00:39:30.657220 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-08 00:39:30.692388 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-08 00:39:30.692469 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-08 00:39:30.692481 | orchestrator | + sleep 5
2026-01-08 00:39:35.697638 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-08 00:39:35.778431 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-08 00:39:35.779623 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-08 00:39:35.779655 | orchestrator | + sleep 5
2026-01-08 00:39:40.782003 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-08 00:39:40.817832 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-08 00:39:40.817940 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-08 00:39:40.817956 | orchestrator | + sleep 5
2026-01-08 00:39:45.824950 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-08 00:39:45.867303 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-08 00:39:45.867424 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-08 00:39:45.867449 | orchestrator | + sleep 5
2026-01-08 00:39:50.871942 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-08 00:39:50.913021 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-08 00:39:50.913114 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-08 00:39:50.913129 | orchestrator | + sleep 5
2026-01-08 00:39:55.917081 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-08 00:39:55.957193 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-08 00:39:55.957291 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-08 00:39:55.957307 | orchestrator | + sleep 5
2026-01-08 00:40:00.961500 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-08 00:40:01.005164 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-08 00:40:01.005312 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-08 00:40:01.005326 | orchestrator | + sleep 5
2026-01-08 00:40:06.010244 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-08 00:40:06.038964 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-08 00:40:06.039016 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-08 00:40:06.039022 | orchestrator | + sleep 5
2026-01-08 00:40:11.042626 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-08 00:40:11.103209 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-08 00:40:11.103273 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-08 00:40:11.103279 | orchestrator | + sleep 5
2026-01-08 00:40:16.104903 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-08 00:40:16.146897 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-08 00:40:16.147016 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-08 00:40:16.147040 | orchestrator | + sleep 5
2026-01-08 00:40:21.151553 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-08 00:40:21.190990 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-08 00:40:21.191078 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-08 00:40:21.191093 | orchestrator | + sleep 5
2026-01-08 00:40:26.195570 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-08 00:40:26.232584 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-08 00:40:26.232725 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-08 00:40:26.232752 | orchestrator | + sleep 5
2026-01-08 00:40:31.236681 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-08 00:40:31.283095 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-08 00:40:31.283202 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-08 00:40:31.283223 | orchestrator | + sleep 5
2026-01-08 00:40:36.288624 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-08 00:40:36.328884 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-08 00:40:36.328977 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-01-08 00:40:36.328992 | orchestrator | + local max_attempts=60
2026-01-08 00:40:36.329004 | orchestrator | + local name=kolla-ansible
2026-01-08 00:40:36.329016 | orchestrator | + local attempt_num=1
2026-01-08 00:40:36.329152 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-01-08 00:40:36.363924 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-08 00:40:36.364032 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-01-08 00:40:36.364047 | orchestrator | + local max_attempts=60
2026-01-08 00:40:36.364059 | orchestrator | + local name=osism-ansible
2026-01-08 00:40:36.364071 | orchestrator | + local attempt_num=1
2026-01-08 00:40:36.364983 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-01-08 00:40:36.395539 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-08 00:40:36.395787 | orchestrator | + [[ true == \t\r\u\e ]]
2026-01-08 00:40:36.395821 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-01-08 00:40:36.560747 | orchestrator | ARA in ceph-ansible already disabled.
2026-01-08 00:40:36.730811 | orchestrator | ARA in kolla-ansible already disabled.
2026-01-08 00:40:36.916022 | orchestrator | ARA in osism-ansible already disabled.
2026-01-08 00:40:37.085161 | orchestrator | ARA in osism-kubernetes already disabled.
2026-01-08 00:40:37.085275 | orchestrator | + osism apply gather-facts
2026-01-08 00:40:49.247185 | orchestrator | 2026-01-08 00:40:49 | INFO  | Task 203f851f-cdb6-4812-9d1f-16fcedacbb72 (gather-facts) was prepared for execution.
2026-01-08 00:40:49.247276 | orchestrator | 2026-01-08 00:40:49 | INFO  | It takes a moment until task 203f851f-cdb6-4812-9d1f-16fcedacbb72 (gather-facts) has been started and output is visible here.
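The `wait_for_container_healthy` trace above polls `docker inspect` for the container's health status, sleeping five seconds between bounded attempts; the status moves from `unhealthy` through `starting` to `healthy` as the containers come up after the manager restart. Reconstructed from the trace as a sketch (an approximation of the logic, not the verbatim testbed function):

```shell
#!/usr/bin/env bash
# Reconstructed sketch of the wait_for_container_healthy loop in the trace:
# poll the Docker health status of a container until it reports "healthy",
# giving up after max_attempts polls.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

Note the sketch calls `docker` from PATH rather than `/usr/bin/docker` as in the trace, so it can be exercised with a stubbed `docker` function.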
2026-01-08 00:41:03.107530 | orchestrator | 2026-01-08 00:41:03.107696 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-08 00:41:03.107720 | orchestrator | 2026-01-08 00:41:03.107738 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-08 00:41:03.107754 | orchestrator | Thursday 08 January 2026 00:40:53 +0000 (0:00:00.215) 0:00:00.215 ****** 2026-01-08 00:41:03.107770 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:41:03.107788 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:41:03.107804 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:41:03.107820 | orchestrator | ok: [testbed-manager] 2026-01-08 00:41:03.107835 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:41:03.107850 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:41:03.107866 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:41:03.107882 | orchestrator | 2026-01-08 00:41:03.107898 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-08 00:41:03.107914 | orchestrator | 2026-01-08 00:41:03.107931 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-08 00:41:03.107947 | orchestrator | Thursday 08 January 2026 00:41:02 +0000 (0:00:08.635) 0:00:08.850 ****** 2026-01-08 00:41:03.107962 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:41:03.107978 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:41:03.107994 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:41:03.108010 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:41:03.108027 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:41:03.108044 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:41:03.108061 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:41:03.108079 | orchestrator | 2026-01-08 00:41:03.108096 | orchestrator | PLAY RECAP 
********************************************************************* 2026-01-08 00:41:03.108115 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-08 00:41:03.108133 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-08 00:41:03.108150 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-08 00:41:03.108166 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-08 00:41:03.108182 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-08 00:41:03.108197 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-08 00:41:03.108212 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-08 00:41:03.108227 | orchestrator | 2026-01-08 00:41:03.108242 | orchestrator | 2026-01-08 00:41:03.108259 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 00:41:03.108308 | orchestrator | Thursday 08 January 2026 00:41:02 +0000 (0:00:00.503) 0:00:09.354 ****** 2026-01-08 00:41:03.108324 | orchestrator | =============================================================================== 2026-01-08 00:41:03.108341 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.64s 2026-01-08 00:41:03.108357 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2026-01-08 00:41:03.429176 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-01-08 00:41:03.445389 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-01-08 00:41:03.459071 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-01-08 00:41:03.478508 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-01-08 00:41:03.489319 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-01-08 00:41:03.500117 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-01-08 00:41:03.512067 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-01-08 00:41:03.526219 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-01-08 00:41:03.537796 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-01-08 00:41:03.558340 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-01-08 00:41:03.571687 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-01-08 00:41:03.588420 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-01-08 00:41:03.601563 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-01-08 00:41:03.619880 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-01-08 00:41:03.640769 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-01-08 00:41:03.656566 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-01-08 00:41:03.677187 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-01-08 00:41:03.694330 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-01-08 00:41:03.707145 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-01-08 00:41:03.716709 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-01-08 00:41:03.726912 | orchestrator | + [[ false == \t\r\u\e ]] 2026-01-08 00:41:04.218805 | orchestrator | ok: Runtime: 0:25:26.931743 2026-01-08 00:41:04.336069 | 2026-01-08 00:41:04.336273 | TASK [Deploy services] 2026-01-08 00:41:04.870757 | orchestrator | skipping: Conditional result was False 2026-01-08 00:41:04.888644 | 2026-01-08 00:41:04.888831 | TASK [Deploy in a nutshell] 2026-01-08 00:41:05.605752 | orchestrator | 2026-01-08 00:41:05.605990 | orchestrator | # PULL IMAGES 2026-01-08 00:41:05.606070 | orchestrator | + set -e 2026-01-08 00:41:05.606095 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-08 00:41:05.606116 | orchestrator | ++ export INTERACTIVE=false 2026-01-08 00:41:05.606131 | orchestrator | ++ INTERACTIVE=false 2026-01-08 00:41:05.606145 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-08 00:41:05.606194 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-08 00:41:05.606219 | orchestrator | + source /opt/manager-vars.sh 2026-01-08 00:41:05.606234 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-08 00:41:05.606253 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-08 00:41:05.606264 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-08 00:41:05.606282 | orchestrator | ++ CEPH_VERSION=reef 2026-01-08 00:41:05.606294 | 
orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-08 00:41:05.606313 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-08 00:41:05.606324 | orchestrator | ++ export MANAGER_VERSION=latest
2026-01-08 00:41:05.606340 | orchestrator | ++ MANAGER_VERSION=latest
2026-01-08 00:41:05.606351 | orchestrator | ++ export OPENSTACK_VERSION=2025.1
2026-01-08 00:41:05.606364 | orchestrator | ++ OPENSTACK_VERSION=2025.1
2026-01-08 00:41:05.606375 | orchestrator | ++ export ARA=false
2026-01-08 00:41:05.606387 | orchestrator | ++ ARA=false
2026-01-08 00:41:05.606398 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-08 00:41:05.606409 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-08 00:41:05.606420 | orchestrator | ++ export TEMPEST=true
2026-01-08 00:41:05.606430 | orchestrator | ++ TEMPEST=true
2026-01-08 00:41:05.606441 | orchestrator | ++ export IS_ZUUL=true
2026-01-08 00:41:05.606452 | orchestrator | ++ IS_ZUUL=true
2026-01-08 00:41:05.606463 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.62
2026-01-08 00:41:05.606474 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.62
2026-01-08 00:41:05.606485 | orchestrator | ++ export EXTERNAL_API=false
2026-01-08 00:41:05.606496 | orchestrator | ++ EXTERNAL_API=false
2026-01-08 00:41:05.606507 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-08 00:41:05.606518 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-08 00:41:05.606529 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-08 00:41:05.606540 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-08 00:41:05.606552 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-08 00:41:05.606563 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-08 00:41:05.606574 | orchestrator | + echo
2026-01-08 00:41:05.606585 | orchestrator | + echo '# PULL IMAGES'
2026-01-08 00:41:05.606596 | orchestrator | + echo
2026-01-08 00:41:05.606647 | orchestrator |
2026-01-08 00:41:05.606970 | orchestrator | ++ semver latest 7.0.0
2026-01-08 00:41:05.673297 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-08 00:41:05.673471 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-08 00:41:05.673489 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-01-08 00:41:07.656660 | orchestrator | 2026-01-08 00:41:07 | INFO  | Trying to run play pull-images in environment custom
2026-01-08 00:41:17.757851 | orchestrator | 2026-01-08 00:41:17 | INFO  | Task 916d68e7-3452-4f9c-b6ea-db5e3a0e4b5e (pull-images) was prepared for execution.
2026-01-08 00:41:17.758080 | orchestrator | 2026-01-08 00:41:17 | INFO  | Task 916d68e7-3452-4f9c-b6ea-db5e3a0e4b5e is running in background. No more output. Check ARA for logs.
2026-01-08 00:41:20.180982 | orchestrator | 2026-01-08 00:41:20 | INFO  | Trying to run play wipe-partitions in environment custom
2026-01-08 00:41:30.332743 | orchestrator | 2026-01-08 00:41:30 | INFO  | Task c867c625-f2f9-4ec1-b5e6-20c953cd71f4 (wipe-partitions) was prepared for execution.
2026-01-08 00:41:30.332917 | orchestrator | 2026-01-08 00:41:30 | INFO  | It takes a moment until task c867c625-f2f9-4ec1-b5e6-20c953cd71f4 (wipe-partitions) has been started and output is visible here.
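The trace above shows the deploy script's version gate: `semver latest 7.0.0` returns `-1`, the numeric check `[[ -1 -ge 0 ]]` fails, and the literal `latest` match then lets the pull proceed. A minimal sketch of that logic, assuming a `semver`-style comparator built on `sort -V` (this is a simplified re-implementation, not the testbed's actual script):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Compare two dotted versions; print -1, 0 or 1 like the `semver` helper
# seen in the trace (simplified: relies on GNU sort's version ordering).
semver_cmp() {
  if [[ "$1" == "$2" ]]; then printf '%s\n' 0
  elif [[ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$1" ]]; then printf '%s\n' -1
  else printf '%s\n' 1
  fi
}

MANAGER_VERSION=latest
# "latest" bypasses the numeric comparison, exactly as in the trace.
if [[ "$MANAGER_VERSION" == "latest" ]] || [[ "$(semver_cmp "$MANAGER_VERSION" 7.0.0)" -ge 0 ]]; then
  echo "pulling images with: osism apply --no-wait -r 2 -e custom pull-images"
fi
```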
2026-01-08 00:41:43.004639 | orchestrator |
2026-01-08 00:41:43.004719 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-01-08 00:41:43.004725 | orchestrator |
2026-01-08 00:41:43.004729 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-01-08 00:41:43.004737 | orchestrator | Thursday 08 January 2026 00:41:34 +0000 (0:00:00.127) 0:00:00.127 ******
2026-01-08 00:41:43.004742 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:41:43.004747 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:41:43.004751 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:41:43.004756 | orchestrator |
2026-01-08 00:41:43.004760 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-01-08 00:41:43.004781 | orchestrator | Thursday 08 January 2026 00:41:35 +0000 (0:00:00.600) 0:00:00.727 ******
2026-01-08 00:41:43.004785 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:41:43.004789 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:41:43.004796 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:41:43.004800 | orchestrator |
2026-01-08 00:41:43.004804 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-01-08 00:41:43.004808 | orchestrator | Thursday 08 January 2026 00:41:35 +0000 (0:00:00.387) 0:00:01.115 ******
2026-01-08 00:41:43.004812 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:41:43.004817 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:41:43.004821 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:41:43.004825 | orchestrator |
2026-01-08 00:41:43.004829 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-01-08 00:41:43.004833 | orchestrator | Thursday 08 January 2026 00:41:36 +0000 (0:00:00.275) 0:00:01.705 ******
2026-01-08 00:41:43.004837 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:41:43.004840 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:41:43.004844 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:41:43.004848 | orchestrator |
2026-01-08 00:41:43.004852 | orchestrator | TASK [Check device availability] ***********************************************
2026-01-08 00:41:43.004856 | orchestrator | Thursday 08 January 2026 00:41:36 +0000 (0:00:00.275) 0:00:01.981 ******
2026-01-08 00:41:43.004860 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-08 00:41:43.004866 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-08 00:41:43.004870 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-08 00:41:43.004873 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-08 00:41:43.004877 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-08 00:41:43.004881 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-08 00:41:43.004885 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-08 00:41:43.004889 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-08 00:41:43.004892 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-08 00:41:43.004896 | orchestrator |
2026-01-08 00:41:43.004900 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-01-08 00:41:43.004904 | orchestrator | Thursday 08 January 2026 00:41:37 +0000 (0:00:01.199) 0:00:03.180 ******
2026-01-08 00:41:43.004908 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-01-08 00:41:43.004912 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-01-08 00:41:43.004916 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-01-08 00:41:43.004920 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-01-08 00:41:43.004923 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-01-08 00:41:43.004927 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-01-08 00:41:43.004931 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-01-08 00:41:43.004934 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-01-08 00:41:43.004938 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-01-08 00:41:43.004942 | orchestrator |
2026-01-08 00:41:43.004946 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-01-08 00:41:43.004950 | orchestrator | Thursday 08 January 2026 00:41:39 +0000 (0:00:01.532) 0:00:04.713 ******
2026-01-08 00:41:43.004953 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-08 00:41:43.004957 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-08 00:41:43.004961 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-08 00:41:43.004965 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-08 00:41:43.004968 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-08 00:41:43.004972 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-08 00:41:43.004976 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-08 00:41:43.004984 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-08 00:41:43.004991 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-08 00:41:43.004995 | orchestrator |
2026-01-08 00:41:43.004999 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-01-08 00:41:43.005002 | orchestrator | Thursday 08 January 2026 00:41:41 +0000 (0:00:02.093) 0:00:06.806 ******
2026-01-08 00:41:43.005006 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:41:43.005010 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:41:43.005014 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:41:43.005067 | orchestrator |
2026-01-08 00:41:43.005074 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-01-08 00:41:43.005080 | orchestrator | Thursday 08 January 2026 00:41:42 +0000 (0:00:00.613) 0:00:07.420 ******
2026-01-08 00:41:43.005085 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:41:43.005092 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:41:43.005097 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:41:43.005103 | orchestrator |
2026-01-08 00:41:43.005109 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 00:41:43.005117 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:41:43.005125 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:41:43.005145 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:41:43.005150 | orchestrator |
2026-01-08 00:41:43.005154 | orchestrator |
2026-01-08 00:41:43.005158 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 00:41:43.005162 | orchestrator | Thursday 08 January 2026 00:41:42 +0000 (0:00:00.643) 0:00:08.063 ******
2026-01-08 00:41:43.005167 | orchestrator | ===============================================================================
2026-01-08 00:41:43.005171 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.09s
2026-01-08 00:41:43.005176 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.53s
2026-01-08 00:41:43.005181 | orchestrator | Check device availability ----------------------------------------------- 1.20s
2026-01-08 00:41:43.005185 | orchestrator | Request device events from the kernel ----------------------------------- 0.64s
2026-01-08 00:41:43.005190 | orchestrator | Reload udev rules ------------------------------------------------------- 0.61s
2026-01-08 00:41:43.005194 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.60s
2026-01-08 00:41:43.005199 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.59s
2026-01-08 00:41:43.005203 | orchestrator | Remove all rook related logical devices --------------------------------- 0.39s
2026-01-08 00:41:43.005208 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.28s
2026-01-08 00:41:55.406294 | orchestrator | 2026-01-08 00:41:55 | INFO  | Task 28695069-9410-4372-83ea-8afdea0dc06f (facts) was prepared for execution.
2026-01-08 00:41:55.406418 | orchestrator | 2026-01-08 00:41:55 | INFO  | It takes a moment until task 28695069-9410-4372-83ea-8afdea0dc06f (facts) has been started and output is visible here.
2026-01-08 00:42:07.888313 | orchestrator |
2026-01-08 00:42:07.888449 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-01-08 00:42:07.888475 | orchestrator |
2026-01-08 00:42:07.888494 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-08 00:42:07.888512 | orchestrator | Thursday 08 January 2026 00:41:59 +0000 (0:00:00.265) 0:00:00.265 ******
2026-01-08 00:42:07.888531 | orchestrator | ok: [testbed-manager]
2026-01-08 00:42:07.888595 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:42:07.888614 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:42:07.888662 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:42:07.888679 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:42:07.888695 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:42:07.888711 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:42:07.888726 | orchestrator |
2026-01-08 00:42:07.888742 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-08 00:42:07.888759 | orchestrator | Thursday 08 January 2026 00:42:00 +0000 (0:00:01.099) 0:00:01.365 ******
2026-01-08 00:42:07.888775 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:42:07.888792 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:42:07.888808 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:42:07.888823 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:42:07.888840 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:07.888856 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:42:07.888873 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:07.888888 | orchestrator |
2026-01-08 00:42:07.888904 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-08 00:42:07.888919 | orchestrator |
2026-01-08 00:42:07.888957 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-08 00:42:07.888975 | orchestrator | Thursday 08 January 2026 00:42:02 +0000 (0:00:01.269) 0:00:02.635 ******
2026-01-08 00:42:07.888992 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:42:07.889009 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:42:07.889027 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:42:07.889043 | orchestrator | ok: [testbed-manager]
2026-01-08 00:42:07.889059 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:42:07.889076 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:42:07.889094 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:42:07.889112 | orchestrator |
2026-01-08 00:42:07.889130 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-08 00:42:07.889148 | orchestrator |
2026-01-08 00:42:07.889165 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-08 00:42:07.889183 | orchestrator | Thursday 08 January 2026 00:42:06 +0000 (0:00:04.850) 0:00:07.485 ******
2026-01-08 00:42:07.889200 | orchestrator | skipping: [testbed-manager]
2026-01-08 00:42:07.889215 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:42:07.889231 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:42:07.889248 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:42:07.889263 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:07.889280 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:42:07.889297 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:07.889312 | orchestrator |
2026-01-08 00:42:07.889329 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 00:42:07.889346 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:42:07.889366 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:42:07.889382 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:42:07.889398 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:42:07.889414 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:42:07.889430 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:42:07.889446 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 00:42:07.889463 | orchestrator |
2026-01-08 00:42:07.889498 | orchestrator |
2026-01-08 00:42:07.889516 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 00:42:07.889533 | orchestrator | Thursday 08 January 2026 00:42:07 +0000 (0:00:00.535) 0:00:08.021 ******
2026-01-08 00:42:07.889578 | orchestrator | ===============================================================================
2026-01-08 00:42:07.889597 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.85s
2026-01-08 00:42:07.889613 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.27s
2026-01-08 00:42:07.889631 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.10s
2026-01-08 00:42:07.889647 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s
2026-01-08 00:42:10.337838 | orchestrator | 2026-01-08 00:42:10 | INFO  | Task 32f96fd9-4682-4161-a605-192ecaa763f8 (ceph-configure-lvm-volumes) was prepared for execution.
2026-01-08 00:42:10.337929 | orchestrator | 2026-01-08 00:42:10 | INFO  | It takes a moment until task 32f96fd9-4682-4161-a605-192ecaa763f8 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-01-08 00:42:22.205846 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-08 00:42:22.205956 | orchestrator | 2.16.14
2026-01-08 00:42:22.205979 | orchestrator |
2026-01-08 00:42:22.205996 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-08 00:42:22.206013 | orchestrator |
2026-01-08 00:42:22.206075 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-08 00:42:22.206092 | orchestrator | Thursday 08 January 2026 00:42:14 +0000 (0:00:00.333) 0:00:00.333 ******
2026-01-08 00:42:22.206109 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-08 00:42:22.206124 | orchestrator |
2026-01-08 00:42:22.206140 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-08 00:42:22.206155 | orchestrator | Thursday 08 January 2026 00:42:15 +0000 (0:00:00.248) 0:00:00.582 ******
2026-01-08 00:42:22.206172 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:42:22.206188 | orchestrator |
2026-01-08 00:42:22.206204 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:22.206220 | orchestrator | Thursday 08 January 2026 00:42:15 +0000 (0:00:00.231) 0:00:00.813 ******
2026-01-08 00:42:22.206237 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-08 00:42:22.206267 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-08 00:42:22.206283 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-08 00:42:22.206298 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-08 00:42:22.206315 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-08 00:42:22.206331 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-08 00:42:22.206348 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-08 00:42:22.206365 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-08 00:42:22.206383 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-08 00:42:22.206400 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-08 00:42:22.206417 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-08 00:42:22.206435 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-08 00:42:22.206452 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-08 00:42:22.206470 | orchestrator |
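The wipe-partitions play above runs three destructive steps per device (`wipefs --all`, a 32 MiB zero-fill with `dd`, then a udev reload/trigger). A safe, runnable sketch of the zeroing step, using a scratch file in place of `/dev/sd{b,c,d}` so nothing real is wiped:

```shell
#!/usr/bin/env bash
set -euo pipefail

# "Overwrite first 32M with zeros" as a function; on a real device the play
# first runs `wipefs --all <dev>` and afterwards `udevadm control --reload`
# plus `udevadm trigger` (kept as comments here since they need root/udev).
zero_first_32m() {
  dd if=/dev/zero of="$1" bs=1M count=32 conv=notrunc status=none
}

disk="$(mktemp)"                                   # stand-in for /dev/sdb
head -c $((48 * 1024 * 1024)) /dev/urandom > "$disk"
zero_first_32m "$disk"
# The leading 32 MiB are now all zero bytes:
cmp -s <(head -c $((32 * 1024 * 1024)) "$disk") \
       <(head -c $((32 * 1024 * 1024)) /dev/zero) && echo "first 32M zeroed"
rm -f "$disk"
```

`conv=notrunc` keeps data beyond the zeroed region intact, which is why the play can run repeatedly against already-provisioned disks.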
2026-01-08 00:42:22.206487 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:22.206533 | orchestrator | Thursday 08 January 2026 00:42:15 +0000 (0:00:00.475) 0:00:01.289 ******
2026-01-08 00:42:22.206580 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:22.206594 | orchestrator |
2026-01-08 00:42:22.206610 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:22.206625 | orchestrator | Thursday 08 January 2026 00:42:16 +0000 (0:00:00.195) 0:00:01.484 ******
2026-01-08 00:42:22.206640 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:22.206655 | orchestrator |
2026-01-08 00:42:22.206670 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:22.206687 | orchestrator | Thursday 08 January 2026 00:42:16 +0000 (0:00:00.209) 0:00:01.693 ******
2026-01-08 00:42:22.206705 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:22.206720 | orchestrator |
2026-01-08 00:42:22.206734 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:22.206753 | orchestrator | Thursday 08 January 2026 00:42:16 +0000 (0:00:00.201) 0:00:01.895 ******
2026-01-08 00:42:22.206767 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:22.206781 | orchestrator |
2026-01-08 00:42:22.206795 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:22.206809 | orchestrator | Thursday 08 January 2026 00:42:16 +0000 (0:00:00.200) 0:00:02.095 ******
2026-01-08 00:42:22.206823 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:22.206838 | orchestrator |
2026-01-08 00:42:22.206852 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:22.206866 | orchestrator | Thursday 08 January 2026 00:42:16 +0000 (0:00:00.215) 0:00:02.310 ******
2026-01-08 00:42:22.206880 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:22.206894 | orchestrator |
2026-01-08 00:42:22.206908 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:22.206923 | orchestrator | Thursday 08 January 2026 00:42:17 +0000 (0:00:00.212) 0:00:02.523 ******
2026-01-08 00:42:22.206937 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:22.206950 | orchestrator |
2026-01-08 00:42:22.206964 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:22.206978 | orchestrator | Thursday 08 January 2026 00:42:17 +0000 (0:00:00.195) 0:00:02.719 ******
2026-01-08 00:42:22.206992 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:22.207007 | orchestrator |
2026-01-08 00:42:22.207021 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:22.207035 | orchestrator | Thursday 08 January 2026 00:42:17 +0000 (0:00:00.205) 0:00:02.924 ******
2026-01-08 00:42:22.207049 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2)
2026-01-08 00:42:22.207066 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2)
2026-01-08 00:42:22.207080 | orchestrator |
2026-01-08 00:42:22.207094 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:22.207136 | orchestrator | Thursday 08 January 2026 00:42:17 +0000 (0:00:00.404) 0:00:03.328 ******
2026-01-08 00:42:22.207154 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_124f655d-2588-4acd-9ece-c76299342e82)
2026-01-08 00:42:22.207179 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_124f655d-2588-4acd-9ece-c76299342e82)
2026-01-08 00:42:22.207194 | orchestrator |
2026-01-08 00:42:22.207209 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:22.207223 | orchestrator | Thursday 08 January 2026 00:42:18 +0000 (0:00:00.631) 0:00:03.960 ******
2026-01-08 00:42:22.207237 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6a42190a-2484-4e8d-b5ec-29d2f01455ea)
2026-01-08 00:42:22.207251 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6a42190a-2484-4e8d-b5ec-29d2f01455ea)
2026-01-08 00:42:22.207266 | orchestrator |
2026-01-08 00:42:22.207280 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:22.207310 | orchestrator | Thursday 08 January 2026 00:42:19 +0000 (0:00:00.643) 0:00:04.603 ******
2026-01-08 00:42:22.207324 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0e18c249-c135-4dbc-997b-e877fb7ddadb)
2026-01-08 00:42:22.207339 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0e18c249-c135-4dbc-997b-e877fb7ddadb)
2026-01-08 00:42:22.207352 | orchestrator |
2026-01-08 00:42:22.207366 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:22.207380 | orchestrator | Thursday 08 January 2026 00:42:20 +0000 (0:00:00.856) 0:00:05.459 ******
2026-01-08 00:42:22.207394 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-08 00:42:22.207408 | orchestrator |
2026-01-08 00:42:22.207422 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:22.207436 | orchestrator | Thursday 08 January 2026 00:42:20 +0000 (0:00:00.332) 0:00:05.791 ******
2026-01-08 00:42:22.207450 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-08 00:42:22.207464 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
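The "Add known links" tasks above associate `/dev/disk/by-id` aliases (the `scsi-0QEMU_…` items) with their kernel device names. The mechanism is just symlink resolution; a sketch using a temporary directory as a stand-in for `/dev/disk/by-id` (the link name is modeled on the log, the mapping logic is an assumption about how the play resolves aliases):

```shell
#!/usr/bin/env bash
set -euo pipefail

byid="$(mktemp -d)"   # stand-in for /dev/disk/by-id
# by-id entries are relative symlinks back into /dev:
ln -s ../../sdb "$byid/scsi-0QEMU_QEMU_HARDDISK_59a47b62"
ln -s ../../sdb "$byid/scsi-SQEMU_QEMU_HARDDISK_59a47b62"

# Resolve each alias to its kernel name (sdb/sdc/sdd) so both spellings of
# the same disk end up attached to one device entry.
for link in "$byid"/*; do
  kernel="$(basename "$(readlink "$link")")"
  echo "$(basename "$link") -> $kernel"
done
rm -rf "$byid"
```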
2026-01-08 00:42:22.207478 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-08 00:42:22.207493 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-08 00:42:22.207507 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-08 00:42:22.207521 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-08 00:42:22.207577 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-08 00:42:22.207595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-08 00:42:22.207611 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-08 00:42:22.207625 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-08 00:42:22.207639 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-08 00:42:22.207652 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-08 00:42:22.207667 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-08 00:42:22.207680 | orchestrator |
2026-01-08 00:42:22.207695 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:22.207709 | orchestrator | Thursday 08 January 2026 00:42:20 +0000 (0:00:00.394) 0:00:06.186 ******
2026-01-08 00:42:22.207723 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:22.207737 | orchestrator |
2026-01-08 00:42:22.207751 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:22.207764 | orchestrator | Thursday 08 January 2026 00:42:20 +0000 (0:00:00.202) 0:00:06.388 ******
2026-01-08 00:42:22.207778 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:22.207792 | orchestrator |
2026-01-08 00:42:22.207805 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:22.207819 | orchestrator | Thursday 08 January 2026 00:42:21 +0000 (0:00:00.200) 0:00:06.588 ******
2026-01-08 00:42:22.207834 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:22.207849 | orchestrator |
2026-01-08 00:42:22.207864 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:22.207880 | orchestrator | Thursday 08 January 2026 00:42:21 +0000 (0:00:00.202) 0:00:06.791 ******
2026-01-08 00:42:22.207896 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:22.207912 | orchestrator |
2026-01-08 00:42:22.207926 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:22.207939 | orchestrator | Thursday 08 January 2026 00:42:21 +0000 (0:00:00.202) 0:00:06.993 ******
2026-01-08 00:42:22.207967 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:22.207981 | orchestrator |
2026-01-08 00:42:22.207997 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:22.208011 | orchestrator | Thursday 08 January 2026 00:42:21 +0000 (0:00:00.219) 0:00:07.213 ******
2026-01-08 00:42:22.208024 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:22.208038 | orchestrator |
2026-01-08 00:42:22.208052 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:22.208065 | orchestrator | Thursday 08 January 2026 00:42:21 +0000 (0:00:00.212) 0:00:07.426 ******
2026-01-08 00:42:22.208079 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:22.208092 | orchestrator |
2026-01-08 00:42:22.208124 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:29.903621 | orchestrator | Thursday 08 January 2026 00:42:22 +0000 (0:00:00.208) 0:00:07.634 ******
2026-01-08 00:42:29.903725 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:29.903753 | orchestrator |
2026-01-08 00:42:29.903774 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:29.903795 | orchestrator | Thursday 08 January 2026 00:42:22 +0000 (0:00:00.196) 0:00:07.830 ******
2026-01-08 00:42:29.903814 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-08 00:42:29.903857 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-08 00:42:29.903879 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-08 00:42:29.903898 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-08 00:42:29.903916 | orchestrator |
2026-01-08 00:42:29.903928 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:29.903940 | orchestrator | Thursday 08 January 2026 00:42:23 +0000 (0:00:00.999) 0:00:08.829 ******
2026-01-08 00:42:29.903950 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:29.903961 | orchestrator |
2026-01-08 00:42:29.903972 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:29.903983 | orchestrator | Thursday 08 January 2026 00:42:23 +0000 (0:00:00.212) 0:00:09.042 ******
2026-01-08 00:42:29.903994 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:29.904005 | orchestrator |
2026-01-08 00:42:29.904016 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:29.904027 | orchestrator | Thursday 08 January 2026 00:42:23 +0000 (0:00:00.216) 0:00:09.258 ******
2026-01-08 00:42:29.904038 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:29.904048 | orchestrator |
2026-01-08 00:42:29.904059 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:29.904070 | orchestrator | Thursday 08 January 2026 00:42:24 +0000 (0:00:00.212) 0:00:09.471 ******
2026-01-08 00:42:29.904081 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:29.904092 | orchestrator |
2026-01-08 00:42:29.904103 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-08 00:42:29.904114 | orchestrator | Thursday 08 January 2026 00:42:24 +0000 (0:00:00.205) 0:00:09.677 ******
2026-01-08 00:42:29.904125 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-01-08 00:42:29.904136 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-01-08 00:42:29.904147 | orchestrator |
2026-01-08 00:42:29.904158 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-08 00:42:29.904169 | orchestrator | Thursday 08 January 2026 00:42:24 +0000 (0:00:00.196) 0:00:09.873 ******
2026-01-08 00:42:29.904180 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:29.904190 | orchestrator |
2026-01-08 00:42:29.904201 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-08 00:42:29.904212 | orchestrator | Thursday 08 January 2026 00:42:24 +0000 (0:00:00.162) 0:00:10.035 ******
2026-01-08 00:42:29.904223 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:29.904234 | orchestrator |
2026-01-08 00:42:29.904245 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-08 00:42:29.904278 | orchestrator | Thursday 08 January 2026 00:42:24 +0000 (0:00:00.141) 0:00:10.177 ******
2026-01-08 00:42:29.904290 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:29.904301 | orchestrator |
2026-01-08 00:42:29.904311 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-08 00:42:29.904322 | orchestrator | Thursday 08 January 2026 00:42:24 +0000 (0:00:00.141) 0:00:10.319 ******
2026-01-08 00:42:29.904333 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:42:29.904344 | orchestrator |
2026-01-08 00:42:29.904355 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-08 00:42:29.904366 | orchestrator | Thursday 08 January 2026 00:42:25 +0000 (0:00:00.151) 0:00:10.470 ******
2026-01-08 00:42:29.904377 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a2587794-ee13-56a9-b71d-149b2fd55b33'}})
2026-01-08 00:42:29.904388 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '703f1367-865b-52a8-8f96-c728fe171d20'}})
2026-01-08 00:42:29.904399 | orchestrator |
2026-01-08 00:42:29.904410 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-08 00:42:29.904421 | orchestrator | Thursday 08 January 2026 00:42:25 +0000 (0:00:00.163) 0:00:10.634 ******
2026-01-08 00:42:29.904433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a2587794-ee13-56a9-b71d-149b2fd55b33'}})
2026-01-08 00:42:29.904455 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '703f1367-865b-52a8-8f96-c728fe171d20'}})
2026-01-08 00:42:29.904466 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:42:29.904476 | orchestrator |
2026-01-08 00:42:29.904487 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-08 00:42:29.904498 | orchestrator | Thursday 08 January 2026 00:42:25 +0000 (0:00:00.155) 0:00:10.789 ******
2026-01-08 00:42:29.904509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a2587794-ee13-56a9-b71d-149b2fd55b33'}})
2026-01-08 00:42:29.904520 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '703f1367-865b-52a8-8f96-c728fe171d20'}})  2026-01-08 00:42:29.904563 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:42:29.904584 | orchestrator | 2026-01-08 00:42:29.904603 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-08 00:42:29.904621 | orchestrator | Thursday 08 January 2026 00:42:25 +0000 (0:00:00.347) 0:00:11.137 ****** 2026-01-08 00:42:29.904633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a2587794-ee13-56a9-b71d-149b2fd55b33'}})  2026-01-08 00:42:29.904664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '703f1367-865b-52a8-8f96-c728fe171d20'}})  2026-01-08 00:42:29.904675 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:42:29.904686 | orchestrator | 2026-01-08 00:42:29.904697 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-08 00:42:29.904708 | orchestrator | Thursday 08 January 2026 00:42:25 +0000 (0:00:00.161) 0:00:11.298 ****** 2026-01-08 00:42:29.904719 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:42:29.904730 | orchestrator | 2026-01-08 00:42:29.904741 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-08 00:42:29.904751 | orchestrator | Thursday 08 January 2026 00:42:26 +0000 (0:00:00.149) 0:00:11.448 ****** 2026-01-08 00:42:29.904762 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:42:29.904773 | orchestrator | 2026-01-08 00:42:29.904784 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-08 00:42:29.904794 | orchestrator | Thursday 08 January 2026 00:42:26 +0000 (0:00:00.151) 0:00:11.599 ****** 2026-01-08 00:42:29.904805 | orchestrator | skipping: [testbed-node-3] 2026-01-08 
00:42:29.904816 | orchestrator | 2026-01-08 00:42:29.904826 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-08 00:42:29.904838 | orchestrator | Thursday 08 January 2026 00:42:26 +0000 (0:00:00.132) 0:00:11.731 ****** 2026-01-08 00:42:29.904857 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:42:29.904869 | orchestrator | 2026-01-08 00:42:29.904880 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-08 00:42:29.904891 | orchestrator | Thursday 08 January 2026 00:42:26 +0000 (0:00:00.133) 0:00:11.865 ****** 2026-01-08 00:42:29.904902 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:42:29.904912 | orchestrator | 2026-01-08 00:42:29.904923 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-08 00:42:29.904934 | orchestrator | Thursday 08 January 2026 00:42:26 +0000 (0:00:00.143) 0:00:12.009 ****** 2026-01-08 00:42:29.904945 | orchestrator | ok: [testbed-node-3] => { 2026-01-08 00:42:29.904956 | orchestrator |  "ceph_osd_devices": { 2026-01-08 00:42:29.904967 | orchestrator |  "sdb": { 2026-01-08 00:42:29.904978 | orchestrator |  "osd_lvm_uuid": "a2587794-ee13-56a9-b71d-149b2fd55b33" 2026-01-08 00:42:29.904989 | orchestrator |  }, 2026-01-08 00:42:29.905000 | orchestrator |  "sdc": { 2026-01-08 00:42:29.905011 | orchestrator |  "osd_lvm_uuid": "703f1367-865b-52a8-8f96-c728fe171d20" 2026-01-08 00:42:29.905022 | orchestrator |  } 2026-01-08 00:42:29.905033 | orchestrator |  } 2026-01-08 00:42:29.905044 | orchestrator | } 2026-01-08 00:42:29.905055 | orchestrator | 2026-01-08 00:42:29.905066 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-08 00:42:29.905083 | orchestrator | Thursday 08 January 2026 00:42:26 +0000 (0:00:00.138) 0:00:12.147 ****** 2026-01-08 00:42:29.905094 | orchestrator | skipping: [testbed-node-3] 2026-01-08 
00:42:29.905105 | orchestrator | 2026-01-08 00:42:29.905116 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-08 00:42:29.905130 | orchestrator | Thursday 08 January 2026 00:42:26 +0000 (0:00:00.133) 0:00:12.281 ****** 2026-01-08 00:42:29.905154 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:42:29.905179 | orchestrator | 2026-01-08 00:42:29.905197 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-08 00:42:29.905214 | orchestrator | Thursday 08 January 2026 00:42:26 +0000 (0:00:00.151) 0:00:12.433 ****** 2026-01-08 00:42:29.905232 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:42:29.905250 | orchestrator | 2026-01-08 00:42:29.905269 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-08 00:42:29.905288 | orchestrator | Thursday 08 January 2026 00:42:27 +0000 (0:00:00.146) 0:00:12.579 ****** 2026-01-08 00:42:29.905306 | orchestrator | changed: [testbed-node-3] => { 2026-01-08 00:42:29.905324 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-08 00:42:29.905336 | orchestrator |  "ceph_osd_devices": { 2026-01-08 00:42:29.905347 | orchestrator |  "sdb": { 2026-01-08 00:42:29.905357 | orchestrator |  "osd_lvm_uuid": "a2587794-ee13-56a9-b71d-149b2fd55b33" 2026-01-08 00:42:29.905368 | orchestrator |  }, 2026-01-08 00:42:29.905379 | orchestrator |  "sdc": { 2026-01-08 00:42:29.905390 | orchestrator |  "osd_lvm_uuid": "703f1367-865b-52a8-8f96-c728fe171d20" 2026-01-08 00:42:29.905401 | orchestrator |  } 2026-01-08 00:42:29.905412 | orchestrator |  }, 2026-01-08 00:42:29.905422 | orchestrator |  "lvm_volumes": [ 2026-01-08 00:42:29.905433 | orchestrator |  { 2026-01-08 00:42:29.905444 | orchestrator |  "data": "osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33", 2026-01-08 00:42:29.905455 | orchestrator |  "data_vg": "ceph-a2587794-ee13-56a9-b71d-149b2fd55b33" 2026-01-08 
00:42:29.905466 | orchestrator |  }, 2026-01-08 00:42:29.905476 | orchestrator |  { 2026-01-08 00:42:29.905488 | orchestrator |  "data": "osd-block-703f1367-865b-52a8-8f96-c728fe171d20", 2026-01-08 00:42:29.905498 | orchestrator |  "data_vg": "ceph-703f1367-865b-52a8-8f96-c728fe171d20" 2026-01-08 00:42:29.905509 | orchestrator |  } 2026-01-08 00:42:29.905520 | orchestrator |  ] 2026-01-08 00:42:29.905591 | orchestrator |  } 2026-01-08 00:42:29.905636 | orchestrator | } 2026-01-08 00:42:29.905657 | orchestrator | 2026-01-08 00:42:29.905676 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-01-08 00:42:29.905696 | orchestrator | Thursday 08 January 2026 00:42:27 +0000 (0:00:00.410) 0:00:12.990 ****** 2026-01-08 00:42:29.905715 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-08 00:42:29.905735 | orchestrator | 2026-01-08 00:42:29.905755 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-01-08 00:42:29.905774 | orchestrator | 2026-01-08 00:42:29.905793 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-08 00:42:29.905814 | orchestrator | Thursday 08 January 2026 00:42:29 +0000 (0:00:01.838) 0:00:14.828 ****** 2026-01-08 00:42:29.905835 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-08 00:42:29.905853 | orchestrator | 2026-01-08 00:42:29.905874 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-08 00:42:29.905895 | orchestrator | Thursday 08 January 2026 00:42:29 +0000 (0:00:00.269) 0:00:15.097 ****** 2026-01-08 00:42:29.905915 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:42:29.905935 | orchestrator | 2026-01-08 00:42:29.905967 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:42:38.011451 | orchestrator | Thursday 08 January 
2026 00:42:29 +0000 (0:00:00.236) 0:00:15.334 ****** 2026-01-08 00:42:38.011604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-08 00:42:38.011622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-08 00:42:38.011634 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-08 00:42:38.011645 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-08 00:42:38.011656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-08 00:42:38.011668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-08 00:42:38.011679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-08 00:42:38.011710 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-08 00:42:38.011721 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-08 00:42:38.011733 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-08 00:42:38.011744 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-08 00:42:38.011759 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-08 00:42:38.011771 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-08 00:42:38.011783 | orchestrator | 2026-01-08 00:42:38.011795 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:42:38.011828 | orchestrator | Thursday 08 January 2026 00:42:30 +0000 (0:00:00.387) 0:00:15.721 ****** 2026-01-08 00:42:38.011839 | 
orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:38.011851 | orchestrator | 2026-01-08 00:42:38.011863 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:42:38.011874 | orchestrator | Thursday 08 January 2026 00:42:30 +0000 (0:00:00.200) 0:00:15.921 ****** 2026-01-08 00:42:38.011885 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:38.011896 | orchestrator | 2026-01-08 00:42:38.011907 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:42:38.011918 | orchestrator | Thursday 08 January 2026 00:42:30 +0000 (0:00:00.193) 0:00:16.115 ****** 2026-01-08 00:42:38.011929 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:38.011940 | orchestrator | 2026-01-08 00:42:38.011959 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:42:38.012005 | orchestrator | Thursday 08 January 2026 00:42:30 +0000 (0:00:00.197) 0:00:16.312 ****** 2026-01-08 00:42:38.012025 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:38.012043 | orchestrator | 2026-01-08 00:42:38.012061 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:42:38.012081 | orchestrator | Thursday 08 January 2026 00:42:31 +0000 (0:00:00.216) 0:00:16.528 ****** 2026-01-08 00:42:38.012097 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:38.012114 | orchestrator | 2026-01-08 00:42:38.012132 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:42:38.012148 | orchestrator | Thursday 08 January 2026 00:42:31 +0000 (0:00:00.611) 0:00:17.140 ****** 2026-01-08 00:42:38.012168 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:38.012187 | orchestrator | 2026-01-08 00:42:38.012205 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 
2026-01-08 00:42:38.012225 | orchestrator | Thursday 08 January 2026 00:42:31 +0000 (0:00:00.208) 0:00:17.348 ****** 2026-01-08 00:42:38.012244 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:38.012263 | orchestrator | 2026-01-08 00:42:38.012281 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:42:38.012301 | orchestrator | Thursday 08 January 2026 00:42:32 +0000 (0:00:00.212) 0:00:17.560 ****** 2026-01-08 00:42:38.012321 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:38.012340 | orchestrator | 2026-01-08 00:42:38.012356 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:42:38.012367 | orchestrator | Thursday 08 January 2026 00:42:32 +0000 (0:00:00.223) 0:00:17.784 ****** 2026-01-08 00:42:38.012378 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1) 2026-01-08 00:42:38.012390 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1) 2026-01-08 00:42:38.012401 | orchestrator | 2026-01-08 00:42:38.012412 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:42:38.012423 | orchestrator | Thursday 08 January 2026 00:42:32 +0000 (0:00:00.417) 0:00:18.202 ****** 2026-01-08 00:42:38.012434 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d8742191-d797-4305-9777-b3b4a7e3f85b) 2026-01-08 00:42:38.012445 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d8742191-d797-4305-9777-b3b4a7e3f85b) 2026-01-08 00:42:38.012456 | orchestrator | 2026-01-08 00:42:38.012466 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:42:38.012477 | orchestrator | Thursday 08 January 2026 00:42:33 +0000 (0:00:00.451) 0:00:18.654 ****** 2026-01-08 00:42:38.012488 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c2b7f2d1-b409-4164-9c96-a325340a2181) 2026-01-08 00:42:38.012499 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c2b7f2d1-b409-4164-9c96-a325340a2181) 2026-01-08 00:42:38.012510 | orchestrator | 2026-01-08 00:42:38.012520 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:42:38.012589 | orchestrator | Thursday 08 January 2026 00:42:33 +0000 (0:00:00.453) 0:00:19.108 ****** 2026-01-08 00:42:38.012609 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7a76cbc0-0137-4ff9-923d-b4b1dcc050dd) 2026-01-08 00:42:38.012621 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7a76cbc0-0137-4ff9-923d-b4b1dcc050dd) 2026-01-08 00:42:38.012633 | orchestrator | 2026-01-08 00:42:38.012653 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:42:38.012664 | orchestrator | Thursday 08 January 2026 00:42:34 +0000 (0:00:00.437) 0:00:19.545 ****** 2026-01-08 00:42:38.012676 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-08 00:42:38.012687 | orchestrator | 2026-01-08 00:42:38.012698 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:42:38.012709 | orchestrator | Thursday 08 January 2026 00:42:34 +0000 (0:00:00.335) 0:00:19.880 ****** 2026-01-08 00:42:38.012730 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-01-08 00:42:38.012742 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-08 00:42:38.012753 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-08 00:42:38.012763 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-08 
00:42:38.012774 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-08 00:42:38.012785 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-08 00:42:38.012796 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-08 00:42:38.012807 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-08 00:42:38.012817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-08 00:42:38.012828 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-08 00:42:38.012839 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-08 00:42:38.012849 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-08 00:42:38.012860 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-08 00:42:38.012871 | orchestrator | 2026-01-08 00:42:38.012882 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:42:38.012893 | orchestrator | Thursday 08 January 2026 00:42:34 +0000 (0:00:00.376) 0:00:20.257 ****** 2026-01-08 00:42:38.012904 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:38.012915 | orchestrator | 2026-01-08 00:42:38.012926 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:42:38.012937 | orchestrator | Thursday 08 January 2026 00:42:35 +0000 (0:00:00.671) 0:00:20.929 ****** 2026-01-08 00:42:38.012948 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:38.012959 | orchestrator | 2026-01-08 00:42:38.012970 | orchestrator | TASK [Add known partitions to the list 
of available block devices] ************* 2026-01-08 00:42:38.012981 | orchestrator | Thursday 08 January 2026 00:42:35 +0000 (0:00:00.202) 0:00:21.132 ****** 2026-01-08 00:42:38.012992 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:38.013003 | orchestrator | 2026-01-08 00:42:38.013014 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:42:38.013025 | orchestrator | Thursday 08 January 2026 00:42:35 +0000 (0:00:00.211) 0:00:21.344 ****** 2026-01-08 00:42:38.013036 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:38.013047 | orchestrator | 2026-01-08 00:42:38.013058 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:42:38.013069 | orchestrator | Thursday 08 January 2026 00:42:36 +0000 (0:00:00.203) 0:00:21.547 ****** 2026-01-08 00:42:38.013080 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:38.013090 | orchestrator | 2026-01-08 00:42:38.013101 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:42:38.013112 | orchestrator | Thursday 08 January 2026 00:42:36 +0000 (0:00:00.223) 0:00:21.771 ****** 2026-01-08 00:42:38.013123 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:38.013134 | orchestrator | 2026-01-08 00:42:38.013145 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:42:38.013156 | orchestrator | Thursday 08 January 2026 00:42:36 +0000 (0:00:00.200) 0:00:21.971 ****** 2026-01-08 00:42:38.013167 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:38.013178 | orchestrator | 2026-01-08 00:42:38.013189 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:42:38.013199 | orchestrator | Thursday 08 January 2026 00:42:36 +0000 (0:00:00.215) 0:00:22.187 ****** 2026-01-08 00:42:38.013217 | orchestrator | skipping: 
[testbed-node-4] 2026-01-08 00:42:38.013228 | orchestrator | 2026-01-08 00:42:38.013239 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:42:38.013250 | orchestrator | Thursday 08 January 2026 00:42:36 +0000 (0:00:00.215) 0:00:22.403 ****** 2026-01-08 00:42:38.013260 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-08 00:42:38.013272 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-08 00:42:38.013283 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-08 00:42:38.013294 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-08 00:42:38.013305 | orchestrator | 2026-01-08 00:42:38.013316 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:42:38.013327 | orchestrator | Thursday 08 January 2026 00:42:37 +0000 (0:00:00.858) 0:00:23.261 ****** 2026-01-08 00:42:38.013338 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:44.318953 | orchestrator | 2026-01-08 00:42:44.319771 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:42:44.319813 | orchestrator | Thursday 08 January 2026 00:42:38 +0000 (0:00:00.184) 0:00:23.445 ****** 2026-01-08 00:42:44.319826 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:44.319838 | orchestrator | 2026-01-08 00:42:44.319850 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:42:44.319880 | orchestrator | Thursday 08 January 2026 00:42:38 +0000 (0:00:00.193) 0:00:23.639 ****** 2026-01-08 00:42:44.319892 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:44.319903 | orchestrator | 2026-01-08 00:42:44.319914 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:42:44.319925 | orchestrator | Thursday 08 January 2026 00:42:38 +0000 (0:00:00.193) 0:00:23.833 ****** 2026-01-08 
00:42:44.319936 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:44.319947 | orchestrator | 2026-01-08 00:42:44.319958 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-08 00:42:44.319969 | orchestrator | Thursday 08 January 2026 00:42:39 +0000 (0:00:00.671) 0:00:24.505 ****** 2026-01-08 00:42:44.319980 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-01-08 00:42:44.319991 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-01-08 00:42:44.320002 | orchestrator | 2026-01-08 00:42:44.320013 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-08 00:42:44.320023 | orchestrator | Thursday 08 January 2026 00:42:39 +0000 (0:00:00.182) 0:00:24.688 ****** 2026-01-08 00:42:44.320034 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:44.320045 | orchestrator | 2026-01-08 00:42:44.320056 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-08 00:42:44.320067 | orchestrator | Thursday 08 January 2026 00:42:39 +0000 (0:00:00.131) 0:00:24.820 ****** 2026-01-08 00:42:44.320078 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:44.320089 | orchestrator | 2026-01-08 00:42:44.320100 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-08 00:42:44.320111 | orchestrator | Thursday 08 January 2026 00:42:39 +0000 (0:00:00.143) 0:00:24.963 ****** 2026-01-08 00:42:44.320121 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:44.320132 | orchestrator | 2026-01-08 00:42:44.320143 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-08 00:42:44.320154 | orchestrator | Thursday 08 January 2026 00:42:39 +0000 (0:00:00.159) 0:00:25.122 ****** 2026-01-08 00:42:44.320164 | orchestrator | ok: [testbed-node-4] 2026-01-08 
00:42:44.320176 | orchestrator | 2026-01-08 00:42:44.320187 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-08 00:42:44.320198 | orchestrator | Thursday 08 January 2026 00:42:39 +0000 (0:00:00.141) 0:00:25.264 ****** 2026-01-08 00:42:44.320209 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '738668c3-85d9-5999-8ba6-58353e2d69fe'}}) 2026-01-08 00:42:44.320221 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3efd50ac-0c86-56a3-96dd-80e79744aaab'}}) 2026-01-08 00:42:44.320255 | orchestrator | 2026-01-08 00:42:44.320266 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-08 00:42:44.320277 | orchestrator | Thursday 08 January 2026 00:42:40 +0000 (0:00:00.173) 0:00:25.437 ****** 2026-01-08 00:42:44.320288 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '738668c3-85d9-5999-8ba6-58353e2d69fe'}})  2026-01-08 00:42:44.320301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3efd50ac-0c86-56a3-96dd-80e79744aaab'}})  2026-01-08 00:42:44.320316 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:44.320336 | orchestrator | 2026-01-08 00:42:44.320355 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-08 00:42:44.320376 | orchestrator | Thursday 08 January 2026 00:42:40 +0000 (0:00:00.177) 0:00:25.615 ****** 2026-01-08 00:42:44.320395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '738668c3-85d9-5999-8ba6-58353e2d69fe'}})  2026-01-08 00:42:44.320414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3efd50ac-0c86-56a3-96dd-80e79744aaab'}})  2026-01-08 00:42:44.320435 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:44.320457 | 
orchestrator | 2026-01-08 00:42:44.320476 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-08 00:42:44.320497 | orchestrator | Thursday 08 January 2026 00:42:40 +0000 (0:00:00.167) 0:00:25.782 ****** 2026-01-08 00:42:44.320518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '738668c3-85d9-5999-8ba6-58353e2d69fe'}})  2026-01-08 00:42:44.320567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3efd50ac-0c86-56a3-96dd-80e79744aaab'}})  2026-01-08 00:42:44.320586 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:44.320605 | orchestrator | 2026-01-08 00:42:44.320622 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-08 00:42:44.320639 | orchestrator | Thursday 08 January 2026 00:42:40 +0000 (0:00:00.169) 0:00:25.952 ****** 2026-01-08 00:42:44.320656 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:42:44.320673 | orchestrator | 2026-01-08 00:42:44.320692 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-08 00:42:44.320711 | orchestrator | Thursday 08 January 2026 00:42:40 +0000 (0:00:00.136) 0:00:26.088 ****** 2026-01-08 00:42:44.320730 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:42:44.320744 | orchestrator | 2026-01-08 00:42:44.320755 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-08 00:42:44.320766 | orchestrator | Thursday 08 January 2026 00:42:40 +0000 (0:00:00.127) 0:00:26.215 ****** 2026-01-08 00:42:44.320801 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:44.320812 | orchestrator | 2026-01-08 00:42:44.320823 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-08 00:42:44.320834 | orchestrator | Thursday 08 January 2026 00:42:41 +0000 (0:00:00.364) 0:00:26.579 
****** 2026-01-08 00:42:44.320845 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:44.320856 | orchestrator | 2026-01-08 00:42:44.320866 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-08 00:42:44.320877 | orchestrator | Thursday 08 January 2026 00:42:41 +0000 (0:00:00.136) 0:00:26.716 ****** 2026-01-08 00:42:44.320888 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:44.320899 | orchestrator | 2026-01-08 00:42:44.320910 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-08 00:42:44.320920 | orchestrator | Thursday 08 January 2026 00:42:41 +0000 (0:00:00.138) 0:00:26.854 ****** 2026-01-08 00:42:44.320932 | orchestrator | ok: [testbed-node-4] => { 2026-01-08 00:42:44.320942 | orchestrator |  "ceph_osd_devices": { 2026-01-08 00:42:44.320954 | orchestrator |  "sdb": { 2026-01-08 00:42:44.320965 | orchestrator |  "osd_lvm_uuid": "738668c3-85d9-5999-8ba6-58353e2d69fe" 2026-01-08 00:42:44.320987 | orchestrator |  }, 2026-01-08 00:42:44.320998 | orchestrator |  "sdc": { 2026-01-08 00:42:44.321017 | orchestrator |  "osd_lvm_uuid": "3efd50ac-0c86-56a3-96dd-80e79744aaab" 2026-01-08 00:42:44.321028 | orchestrator |  } 2026-01-08 00:42:44.321039 | orchestrator |  } 2026-01-08 00:42:44.321050 | orchestrator | } 2026-01-08 00:42:44.321061 | orchestrator | 2026-01-08 00:42:44.321072 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-08 00:42:44.321083 | orchestrator | Thursday 08 January 2026 00:42:41 +0000 (0:00:00.148) 0:00:27.003 ****** 2026-01-08 00:42:44.321093 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:44.321104 | orchestrator | 2026-01-08 00:42:44.321115 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-08 00:42:44.321126 | orchestrator | Thursday 08 January 2026 00:42:41 +0000 (0:00:00.162) 0:00:27.165 ****** 
2026-01-08 00:42:44.321137 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:44.321147 | orchestrator | 2026-01-08 00:42:44.321158 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-08 00:42:44.321169 | orchestrator | Thursday 08 January 2026 00:42:41 +0000 (0:00:00.149) 0:00:27.314 ****** 2026-01-08 00:42:44.321180 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:42:44.321191 | orchestrator | 2026-01-08 00:42:44.321201 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-08 00:42:44.321212 | orchestrator | Thursday 08 January 2026 00:42:42 +0000 (0:00:00.174) 0:00:27.488 ****** 2026-01-08 00:42:44.321223 | orchestrator | changed: [testbed-node-4] => { 2026-01-08 00:42:44.321234 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-08 00:42:44.321245 | orchestrator |  "ceph_osd_devices": { 2026-01-08 00:42:44.321256 | orchestrator |  "sdb": { 2026-01-08 00:42:44.321272 | orchestrator |  "osd_lvm_uuid": "738668c3-85d9-5999-8ba6-58353e2d69fe" 2026-01-08 00:42:44.321283 | orchestrator |  }, 2026-01-08 00:42:44.321294 | orchestrator |  "sdc": { 2026-01-08 00:42:44.321305 | orchestrator |  "osd_lvm_uuid": "3efd50ac-0c86-56a3-96dd-80e79744aaab" 2026-01-08 00:42:44.321316 | orchestrator |  } 2026-01-08 00:42:44.321327 | orchestrator |  }, 2026-01-08 00:42:44.321337 | orchestrator |  "lvm_volumes": [ 2026-01-08 00:42:44.321348 | orchestrator |  { 2026-01-08 00:42:44.321359 | orchestrator |  "data": "osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe", 2026-01-08 00:42:44.321370 | orchestrator |  "data_vg": "ceph-738668c3-85d9-5999-8ba6-58353e2d69fe" 2026-01-08 00:42:44.321381 | orchestrator |  }, 2026-01-08 00:42:44.321392 | orchestrator |  { 2026-01-08 00:42:44.321402 | orchestrator |  "data": "osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab", 2026-01-08 00:42:44.321413 | orchestrator |  "data_vg": "ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab" 
2026-01-08 00:42:44.321424 | orchestrator |             }
2026-01-08 00:42:44.321435 | orchestrator |         ]
2026-01-08 00:42:44.321445 | orchestrator |     }
2026-01-08 00:42:44.321456 | orchestrator | }
2026-01-08 00:42:44.321467 | orchestrator |
2026-01-08 00:42:44.321478 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-08 00:42:44.321489 | orchestrator | Thursday 08 January 2026 00:42:42 +0000 (0:00:00.178) 0:00:27.666 ******
2026-01-08 00:42:44.321499 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-08 00:42:44.321510 | orchestrator |
2026-01-08 00:42:44.321543 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-08 00:42:44.321555 | orchestrator |
2026-01-08 00:42:44.321566 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-08 00:42:44.321577 | orchestrator | Thursday 08 January 2026 00:42:43 +0000 (0:00:01.032) 0:00:28.699 ******
2026-01-08 00:42:44.321588 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-08 00:42:44.321599 | orchestrator |
2026-01-08 00:42:44.321616 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-08 00:42:44.321646 | orchestrator | Thursday 08 January 2026 00:42:43 +0000 (0:00:00.504) 0:00:29.203 ******
2026-01-08 00:42:44.321666 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:42:44.321686 | orchestrator |
2026-01-08 00:42:44.321698 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:44.321709 | orchestrator | Thursday 08 January 2026 00:42:43 +0000 (0:00:00.209) 0:00:29.413 ******
2026-01-08 00:42:44.321719 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-08 00:42:44.321730 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-08 00:42:44.321741 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-08 00:42:44.321751 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-08 00:42:44.321762 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-08 00:42:44.321781 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-08 00:42:52.301638 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-08 00:42:52.301792 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-08 00:42:52.301806 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-08 00:42:52.301815 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-08 00:42:52.301824 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-08 00:42:52.301833 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-08 00:42:52.301841 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-08 00:42:52.301850 | orchestrator |
2026-01-08 00:42:52.301860 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:52.301870 | orchestrator | Thursday 08 January 2026 00:42:44 +0000 (0:00:00.335) 0:00:29.749 ******
2026-01-08 00:42:52.301879 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:52.301888 | orchestrator |
2026-01-08 00:42:52.301897 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:52.301906 | orchestrator | Thursday 08 January 2026 00:42:44 +0000 (0:00:00.168) 0:00:29.918 ******
2026-01-08 00:42:52.301918 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:52.301932 | orchestrator |
2026-01-08 00:42:52.301948 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:52.301963 | orchestrator | Thursday 08 January 2026 00:42:44 +0000 (0:00:00.194) 0:00:30.113 ******
2026-01-08 00:42:52.301978 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:52.301993 | orchestrator |
2026-01-08 00:42:52.302009 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:52.302068 | orchestrator | Thursday 08 January 2026 00:42:44 +0000 (0:00:00.190) 0:00:30.304 ******
2026-01-08 00:42:52.302083 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:52.302099 | orchestrator |
2026-01-08 00:42:52.302115 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:52.302130 | orchestrator | Thursday 08 January 2026 00:42:45 +0000 (0:00:00.199) 0:00:30.503 ******
2026-01-08 00:42:52.302141 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:52.302151 | orchestrator |
2026-01-08 00:42:52.302161 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:52.302171 | orchestrator | Thursday 08 January 2026 00:42:45 +0000 (0:00:00.182) 0:00:30.686 ******
2026-01-08 00:42:52.302181 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:52.302192 | orchestrator |
2026-01-08 00:42:52.302221 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:52.302251 | orchestrator | Thursday 08 January 2026 00:42:45 +0000 (0:00:00.163) 0:00:30.849 ******
2026-01-08 00:42:52.302262 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:52.302272 | orchestrator |
2026-01-08 00:42:52.302283 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:52.302292 | orchestrator | Thursday 08 January 2026 00:42:45 +0000 (0:00:00.185) 0:00:31.035 ******
2026-01-08 00:42:52.302304 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:52.302318 | orchestrator |
2026-01-08 00:42:52.302333 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:52.302347 | orchestrator | Thursday 08 January 2026 00:42:45 +0000 (0:00:00.199) 0:00:31.234 ******
2026-01-08 00:42:52.302361 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa)
2026-01-08 00:42:52.302376 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa)
2026-01-08 00:42:52.302390 | orchestrator |
2026-01-08 00:42:52.302405 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:52.302422 | orchestrator | Thursday 08 January 2026 00:42:46 +0000 (0:00:00.699) 0:00:31.934 ******
2026-01-08 00:42:52.302436 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c2709d8e-63c5-44e7-8dd5-568d2763b490)
2026-01-08 00:42:52.302452 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c2709d8e-63c5-44e7-8dd5-568d2763b490)
2026-01-08 00:42:52.302466 | orchestrator |
2026-01-08 00:42:52.302480 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:52.302495 | orchestrator | Thursday 08 January 2026 00:42:46 +0000 (0:00:00.361) 0:00:32.295 ******
2026-01-08 00:42:52.302511 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_083917d3-ae8a-40d6-964f-5c24c6020ef0)
2026-01-08 00:42:52.302551 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_083917d3-ae8a-40d6-964f-5c24c6020ef0)
2026-01-08 00:42:52.302565 | orchestrator |
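The repeated "Add known links" tasks above attach stable `/dev/disk/by-id` names (e.g. `scsi-0QEMU_QEMU_HARDDISK_<serial>`) to the kernel devices they point at, so `sdb` and its by-id aliases are treated as the same disk. A minimal sketch of that resolution step, assuming it amounts to plain symlink resolution; `map_device_links` is a hypothetical helper for illustration, not code from the playbook:

```python
import os

def map_device_links(by_id_dir="/dev/disk/by-id"):
    """Map each kernel device name (e.g. 'sdb') to the list of stable
    by-id link names that resolve to it. Hypothetical helper sketching
    what the 'Add known links' tasks appear to do."""
    links = {}
    if not os.path.isdir(by_id_dir):
        return links
    for name in sorted(os.listdir(by_id_dir)):
        # Resolve the symlink to its canonical target, e.g. /dev/sdb.
        target = os.path.realpath(os.path.join(by_id_dir, name))
        links.setdefault(os.path.basename(target), []).append(name)
    return links
```

With QEMU disks like the ones in this log, both the `scsi-0QEMU_...` and `scsi-SQEMU_...` links for a serial would land in the same device's alias list.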
2026-01-08 00:42:52.302579 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:52.302588 | orchestrator | Thursday 08 January 2026 00:42:47 +0000 (0:00:00.394) 0:00:32.690 ******
2026-01-08 00:42:52.302597 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ca658e2c-6884-4cb0-a984-2a30c4218b42)
2026-01-08 00:42:52.302606 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ca658e2c-6884-4cb0-a984-2a30c4218b42)
2026-01-08 00:42:52.302614 | orchestrator |
2026-01-08 00:42:52.302623 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:42:52.302632 | orchestrator | Thursday 08 January 2026 00:42:47 +0000 (0:00:00.412) 0:00:33.102 ******
2026-01-08 00:42:52.302641 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-08 00:42:52.302650 | orchestrator |
2026-01-08 00:42:52.302659 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:52.302686 | orchestrator | Thursday 08 January 2026 00:42:48 +0000 (0:00:00.335) 0:00:33.438 ******
2026-01-08 00:42:52.302696 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-01-08 00:42:52.302705 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-01-08 00:42:52.302714 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-01-08 00:42:52.302722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-01-08 00:42:52.302731 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-01-08 00:42:52.302740 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-01-08 00:42:52.302749 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-01-08 00:42:52.302758 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-01-08 00:42:52.302776 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-01-08 00:42:52.302785 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-01-08 00:42:52.302793 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-01-08 00:42:52.302802 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-01-08 00:42:52.302811 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-01-08 00:42:52.302819 | orchestrator |
2026-01-08 00:42:52.302828 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:52.302837 | orchestrator | Thursday 08 January 2026 00:42:48 +0000 (0:00:00.486) 0:00:33.924 ******
2026-01-08 00:42:52.302846 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:52.302855 | orchestrator |
2026-01-08 00:42:52.302863 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:52.302872 | orchestrator | Thursday 08 January 2026 00:42:48 +0000 (0:00:00.273) 0:00:34.198 ******
2026-01-08 00:42:52.302881 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:52.302889 | orchestrator |
2026-01-08 00:42:52.302898 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:52.302907 | orchestrator | Thursday 08 January 2026 00:42:49 +0000 (0:00:00.258) 0:00:34.457 ******
2026-01-08 00:42:52.302916 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:52.302924 | orchestrator |
2026-01-08 00:42:52.302933 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:52.302942 | orchestrator | Thursday 08 January 2026 00:42:49 +0000 (0:00:00.212) 0:00:34.670 ******
2026-01-08 00:42:52.302951 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:52.302960 | orchestrator |
2026-01-08 00:42:52.302968 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:52.302977 | orchestrator | Thursday 08 January 2026 00:42:49 +0000 (0:00:00.201) 0:00:34.871 ******
2026-01-08 00:42:52.302986 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:52.302995 | orchestrator |
2026-01-08 00:42:52.303003 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:52.303012 | orchestrator | Thursday 08 January 2026 00:42:49 +0000 (0:00:00.234) 0:00:35.106 ******
2026-01-08 00:42:52.303021 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:52.303030 | orchestrator |
2026-01-08 00:42:52.303039 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:52.303048 | orchestrator | Thursday 08 January 2026 00:42:50 +0000 (0:00:00.657) 0:00:35.763 ******
2026-01-08 00:42:52.303056 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:52.303065 | orchestrator |
2026-01-08 00:42:52.303074 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:52.303082 | orchestrator | Thursday 08 January 2026 00:42:50 +0000 (0:00:00.202) 0:00:35.966 ******
2026-01-08 00:42:52.303091 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:52.303100 | orchestrator |
2026-01-08 00:42:52.303108 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:52.303117 | orchestrator | Thursday 08 January 2026 00:42:50 +0000 (0:00:00.197) 0:00:36.163 ******
2026-01-08 00:42:52.303126 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-01-08 00:42:52.303135 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-01-08 00:42:52.303144 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-01-08 00:42:52.303153 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-01-08 00:42:52.303162 | orchestrator |
2026-01-08 00:42:52.303170 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:52.303179 | orchestrator | Thursday 08 January 2026 00:42:51 +0000 (0:00:00.663) 0:00:36.826 ******
2026-01-08 00:42:52.303188 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:52.303202 | orchestrator |
2026-01-08 00:42:52.303211 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:52.303226 | orchestrator | Thursday 08 January 2026 00:42:51 +0000 (0:00:00.196) 0:00:37.023 ******
2026-01-08 00:42:52.303236 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:52.303244 | orchestrator |
2026-01-08 00:42:52.303253 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:52.303262 | orchestrator | Thursday 08 January 2026 00:42:51 +0000 (0:00:00.222) 0:00:37.246 ******
2026-01-08 00:42:52.303271 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:52.303279 | orchestrator |
2026-01-08 00:42:52.303288 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:42:52.303297 | orchestrator | Thursday 08 January 2026 00:42:52 +0000 (0:00:00.224) 0:00:37.470 ******
2026-01-08 00:42:52.303320 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:52.303329 | orchestrator |
2026-01-08 00:42:52.303352 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-08 00:42:56.629223 | orchestrator | Thursday 08 January 2026 00:42:52 +0000 (0:00:00.237) 0:00:37.707 ******
2026-01-08 00:42:56.629335 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-01-08 00:42:56.629349 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-01-08 00:42:56.629359 | orchestrator |
2026-01-08 00:42:56.629376 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-08 00:42:56.629392 | orchestrator | Thursday 08 January 2026 00:42:52 +0000 (0:00:00.212) 0:00:37.920 ******
2026-01-08 00:42:56.629412 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:56.629433 | orchestrator |
2026-01-08 00:42:56.629449 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-08 00:42:56.629465 | orchestrator | Thursday 08 January 2026 00:42:52 +0000 (0:00:00.152) 0:00:38.072 ******
2026-01-08 00:42:56.629480 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:56.629495 | orchestrator |
2026-01-08 00:42:56.629511 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-08 00:42:56.629551 | orchestrator | Thursday 08 January 2026 00:42:52 +0000 (0:00:00.127) 0:00:38.199 ******
2026-01-08 00:42:56.629566 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:56.629579 | orchestrator |
2026-01-08 00:42:56.629593 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-08 00:42:56.629608 | orchestrator | Thursday 08 January 2026 00:42:53 +0000 (0:00:00.413) 0:00:38.613 ******
2026-01-08 00:42:56.629622 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:42:56.629637 | orchestrator |
2026-01-08 00:42:56.629653 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-08 00:42:56.629668 | orchestrator | Thursday 08 January 2026 00:42:53 +0000 (0:00:00.135) 0:00:38.748 ******
2026-01-08 00:42:56.629684 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e7c35fc3-220b-5a3c-9d36-601219d17f28'}})
2026-01-08 00:42:56.629699 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1538380d-5182-5482-9616-e6fa16e7f592'}})
2026-01-08 00:42:56.629713 | orchestrator |
2026-01-08 00:42:56.629727 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-08 00:42:56.629743 | orchestrator | Thursday 08 January 2026 00:42:53 +0000 (0:00:00.172) 0:00:38.921 ******
2026-01-08 00:42:56.629757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e7c35fc3-220b-5a3c-9d36-601219d17f28'}})
2026-01-08 00:42:56.629792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1538380d-5182-5482-9616-e6fa16e7f592'}})
2026-01-08 00:42:56.629802 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:56.629811 | orchestrator |
2026-01-08 00:42:56.629820 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-08 00:42:56.629829 | orchestrator | Thursday 08 January 2026 00:42:53 +0000 (0:00:00.156) 0:00:39.078 ******
2026-01-08 00:42:56.629856 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e7c35fc3-220b-5a3c-9d36-601219d17f28'}})
2026-01-08 00:42:56.629865 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1538380d-5182-5482-9616-e6fa16e7f592'}})
2026-01-08 00:42:56.629874 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:56.629882 | orchestrator |
2026-01-08 00:42:56.629891 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-08 00:42:56.629900 | orchestrator | Thursday 08 January 2026 00:42:53 +0000 (0:00:00.140) 0:00:39.219 ******
2026-01-08 00:42:56.629909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e7c35fc3-220b-5a3c-9d36-601219d17f28'}})
2026-01-08 00:42:56.629919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1538380d-5182-5482-9616-e6fa16e7f592'}})
2026-01-08 00:42:56.629930 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:56.629941 | orchestrator |
2026-01-08 00:42:56.629952 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-08 00:42:56.629963 | orchestrator | Thursday 08 January 2026 00:42:53 +0000 (0:00:00.154) 0:00:39.374 ******
2026-01-08 00:42:56.629974 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:42:56.629985 | orchestrator |
2026-01-08 00:42:56.629996 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-08 00:42:56.630006 | orchestrator | Thursday 08 January 2026 00:42:54 +0000 (0:00:00.140) 0:00:39.514 ******
2026-01-08 00:42:56.630066 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:42:56.630080 | orchestrator |
2026-01-08 00:42:56.630091 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-08 00:42:56.630102 | orchestrator | Thursday 08 January 2026 00:42:54 +0000 (0:00:00.160) 0:00:39.675 ******
2026-01-08 00:42:56.630113 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:56.630124 | orchestrator |
2026-01-08 00:42:56.630135 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-08 00:42:56.630146 | orchestrator | Thursday 08 January 2026 00:42:54 +0000 (0:00:00.135) 0:00:39.810 ******
2026-01-08 00:42:56.630157 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:56.630168 | orchestrator |
2026-01-08 00:42:56.630178 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-08 00:42:56.630189 | orchestrator | Thursday 08 January 2026 00:42:54 +0000 (0:00:00.122) 0:00:39.933 ******
2026-01-08 00:42:56.630200 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:56.630211 | orchestrator |
2026-01-08 00:42:56.630222 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-08 00:42:56.630233 | orchestrator | Thursday 08 January 2026 00:42:54 +0000 (0:00:00.135) 0:00:40.069 ******
2026-01-08 00:42:56.630244 | orchestrator | ok: [testbed-node-5] => {
2026-01-08 00:42:56.630255 | orchestrator |     "ceph_osd_devices": {
2026-01-08 00:42:56.630266 | orchestrator |         "sdb": {
2026-01-08 00:42:56.630299 | orchestrator |             "osd_lvm_uuid": "e7c35fc3-220b-5a3c-9d36-601219d17f28"
2026-01-08 00:42:56.630311 | orchestrator |         },
2026-01-08 00:42:56.630322 | orchestrator |         "sdc": {
2026-01-08 00:42:56.630333 | orchestrator |             "osd_lvm_uuid": "1538380d-5182-5482-9616-e6fa16e7f592"
2026-01-08 00:42:56.630344 | orchestrator |         }
2026-01-08 00:42:56.630355 | orchestrator |     }
2026-01-08 00:42:56.630366 | orchestrator | }
2026-01-08 00:42:56.630377 | orchestrator |
2026-01-08 00:42:56.630388 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-08 00:42:56.630399 | orchestrator | Thursday 08 January 2026 00:42:54 +0000 (0:00:00.115) 0:00:40.184 ******
2026-01-08 00:42:56.630410 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:56.630421 | orchestrator |
2026-01-08 00:42:56.630432 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-08 00:42:56.630443 | orchestrator | Thursday 08 January 2026 00:42:54 +0000 (0:00:00.129) 0:00:40.313 ******
2026-01-08 00:42:56.630462 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:56.630473 | orchestrator |
2026-01-08 00:42:56.630483 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-08 00:42:56.630494 | orchestrator | Thursday 08 January 2026 00:42:55 +0000 (0:00:00.258) 0:00:40.571 ******
2026-01-08 00:42:56.630505 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:42:56.630553 | orchestrator |
2026-01-08 00:42:56.630565 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-08 00:42:56.630577 | orchestrator | Thursday 08 January 2026 00:42:55 +0000 (0:00:00.094) 0:00:40.666 ******
2026-01-08 00:42:56.630588 | orchestrator | changed: [testbed-node-5] => {
2026-01-08 00:42:56.630599 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-08 00:42:56.630610 | orchestrator |         "ceph_osd_devices": {
2026-01-08 00:42:56.630621 | orchestrator |             "sdb": {
2026-01-08 00:42:56.630632 | orchestrator |                 "osd_lvm_uuid": "e7c35fc3-220b-5a3c-9d36-601219d17f28"
2026-01-08 00:42:56.630643 | orchestrator |             },
2026-01-08 00:42:56.630654 | orchestrator |             "sdc": {
2026-01-08 00:42:56.630665 | orchestrator |                 "osd_lvm_uuid": "1538380d-5182-5482-9616-e6fa16e7f592"
2026-01-08 00:42:56.630676 | orchestrator |             }
2026-01-08 00:42:56.630686 | orchestrator |         },
2026-01-08 00:42:56.630697 | orchestrator |         "lvm_volumes": [
2026-01-08 00:42:56.630708 | orchestrator |             {
2026-01-08 00:42:56.630719 | orchestrator |                 "data": "osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28",
2026-01-08 00:42:56.630730 | orchestrator |                 "data_vg": "ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28"
2026-01-08 00:42:56.630741 | orchestrator |             },
2026-01-08 00:42:56.630752 | orchestrator |             {
2026-01-08 00:42:56.630763 | orchestrator |                 "data": "osd-block-1538380d-5182-5482-9616-e6fa16e7f592",
2026-01-08 00:42:56.630783 | orchestrator |                 "data_vg": "ceph-1538380d-5182-5482-9616-e6fa16e7f592"
2026-01-08 00:42:56.630795 | orchestrator |             }
2026-01-08 00:42:56.630811 | orchestrator |         ]
2026-01-08 00:42:56.630822 | orchestrator |     }
2026-01-08 00:42:56.630833 | orchestrator | }
2026-01-08 00:42:56.630844 | orchestrator |
2026-01-08 00:42:56.630855 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-08 00:42:56.630866 | orchestrator | Thursday 08 January 2026 00:42:55 +0000 (0:00:00.149) 0:00:40.815 ******
2026-01-08 00:42:56.630877 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-08 00:42:56.630888 | orchestrator |
2026-01-08 00:42:56.630899 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 00:42:56.630910 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-08 00:42:56.630923 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-08 00:42:56.630934 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-08 00:42:56.630945 | orchestrator |
2026-01-08 00:42:56.630956 | orchestrator |
2026-01-08 00:42:56.630967 | orchestrator |
2026-01-08 00:42:56.630978 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 00:42:56.630989 | orchestrator | Thursday 08 January 2026 00:42:56 +0000 (0:00:01.094) 0:00:41.909 ******
2026-01-08 00:42:56.631000 | orchestrator | ===============================================================================
2026-01-08 00:42:56.631011 | orchestrator | Write configuration file ------------------------------------------------ 3.97s
2026-01-08 00:42:56.631022 | orchestrator | Add known partitions to the list of available block devices ------------- 1.26s
2026-01-08 00:42:56.631032 | orchestrator | Add known links to the list of available block devices ------------------ 1.20s
2026-01-08 00:42:56.631043 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.02s
2026-01-08 00:42:56.631061 | orchestrator | Add known partitions to the list of available block devices ------------- 1.00s
2026-01-08 00:42:56.631072 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s
2026-01-08 00:42:56.631083 | orchestrator | Add known links to the list of available block devices ------------------ 0.86s
2026-01-08 00:42:56.631094 | orchestrator | Print configuration data ------------------------------------------------ 0.74s
2026-01-08 00:42:56.631104 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.71s
2026-01-08 00:42:56.631115 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s
2026-01-08 00:42:56.631126 | orchestrator | Get initial list of available block devices ----------------------------- 0.68s
2026-01-08 00:42:56.631137 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s
2026-01-08 00:42:56.631148 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s
2026-01-08 00:42:56.631165 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2026-01-08 00:42:56.736026 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2026-01-08 00:42:56.736142 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.66s
2026-01-08 00:42:56.736158 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2026-01-08 00:42:56.736170 | orchestrator | Set DB devices config data ---------------------------------------------- 0.63s
2026-01-08 00:42:56.736181 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2026-01-08 00:42:56.736192 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2026-01-08 00:43:19.286463 | orchestrator | 2026-01-08 00:43:19 | INFO  | Task 4df2df65-77d4-4dec-abac-9fea280ca821 (sync inventory) is running in background. Output coming soon.
2026-01-08 00:43:45.642897 | orchestrator | 2026-01-08 00:43:20 | INFO  | Starting group_vars file reorganization
2026-01-08 00:43:45.642978 | orchestrator | 2026-01-08 00:43:20 | INFO  | Moved 0 file(s) to their respective directories
2026-01-08 00:43:45.642985 | orchestrator | 2026-01-08 00:43:20 | INFO  | Group_vars file reorganization completed
2026-01-08 00:43:45.642989 | orchestrator | 2026-01-08 00:43:23 | INFO  | Starting variable preparation from inventory
2026-01-08 00:43:45.642994 | orchestrator | 2026-01-08 00:43:26 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-01-08 00:43:45.642999 | orchestrator | 2026-01-08 00:43:26 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-01-08 00:43:45.643003 | orchestrator | 2026-01-08 00:43:26 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-01-08 00:43:45.643007 | orchestrator | 2026-01-08 00:43:26 | INFO  | 3 file(s) written, 6 host(s) processed
2026-01-08 00:43:45.643011 | orchestrator | 2026-01-08 00:43:26 | INFO  | Variable preparation completed
2026-01-08 00:43:45.643016 | orchestrator | 2026-01-08 00:43:28 | INFO  | Starting inventory overwrite handling
2026-01-08 00:43:45.643020 | orchestrator | 2026-01-08 00:43:28 | INFO  | Handling group overwrites in 99-overwrite
2026-01-08 00:43:45.643024 | orchestrator | 2026-01-08 00:43:28 | INFO  | Removing group frr:children from 60-generic
2026-01-08 00:43:45.643028 | orchestrator | 2026-01-08 00:43:28 | INFO  | Removing group netbird:children from 50-infrastructure
2026-01-08 00:43:45.643032 | orchestrator | 2026-01-08 00:43:28 | INFO  | Removing group ceph-rgw from 50-ceph
2026-01-08 00:43:45.643036 | orchestrator | 2026-01-08 00:43:28 | INFO  | Removing group ceph-mds from 50-ceph
2026-01-08 00:43:45.643040 | orchestrator | 2026-01-08 00:43:28 | INFO  | Handling group overwrites in 20-roles
2026-01-08 00:43:45.643060 | orchestrator | 2026-01-08 00:43:28 | INFO  | Removing group k3s_node from 50-infrastructure
2026-01-08 00:43:45.643065 | orchestrator | 2026-01-08 00:43:28 | INFO  | Removed 5 group(s) in total
2026-01-08 00:43:45.643068 | orchestrator | 2026-01-08 00:43:28 | INFO  | Inventory overwrite handling completed
2026-01-08 00:43:45.643072 | orchestrator | 2026-01-08 00:43:29 | INFO  | Starting merge of inventory files
2026-01-08 00:43:45.643076 | orchestrator | 2026-01-08 00:43:29 | INFO  | Inventory files merged successfully
2026-01-08 00:43:45.643080 | orchestrator | 2026-01-08 00:43:33 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-01-08 00:43:45.643084 | orchestrator | 2026-01-08 00:43:44 | INFO  | Successfully wrote ClusterShell configuration
2026-01-08 00:43:45.643088 | orchestrator | [master ee2fc36] 2026-01-08-00-43
2026-01-08 00:43:45.643093 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-01-08 00:43:47.373942 | orchestrator | 2026-01-08 00:43:47 | INFO  | Task 09ebbea4-c8b1-4d15-a642-3507983f7bed (ceph-create-lvm-devices) was prepared for execution.
2026-01-08 00:43:47.374053 | orchestrator | 2026-01-08 00:43:47 | INFO  | It takes a moment until task 09ebbea4-c8b1-4d15-a642-3507983f7bed (ceph-create-lvm-devices) has been started and output is visible here.
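The "Print configuration data" output earlier in the play shows the naming convention being applied: each OSD device's `osd_lvm_uuid` is expanded into an `osd-block-<uuid>` logical volume inside a `ceph-<uuid>` volume group. A minimal sketch of that mapping, assuming it is a plain string transformation over the `ceph_osd_devices` dict; `build_lvm_volumes` is a hypothetical helper for illustration, not code from the OSISM playbooks:

```python
def build_lvm_volumes(ceph_osd_devices):
    """Derive the lvm_volumes list from ceph_osd_devices, mirroring the
    naming visible in the log: data = osd-block-<uuid>, data_vg = ceph-<uuid>.
    Hypothetical helper, not part of the playbook."""
    return [
        {
            "data": f"osd-block-{conf['osd_lvm_uuid']}",
            "data_vg": f"ceph-{conf['osd_lvm_uuid']}",
        }
        # Sort by device name so the result is deterministic.
        for device, conf in sorted(ceph_osd_devices.items())
    ]

# Values taken from the testbed-node-5 output above.
devices = {
    "sdb": {"osd_lvm_uuid": "e7c35fc3-220b-5a3c-9d36-601219d17f28"},
    "sdc": {"osd_lvm_uuid": "1538380d-5182-5482-9616-e6fa16e7f592"},
}
```

Note that the UUIDs look like version-5 (name-based) UUIDs, which would explain why the "Set UUIDs for OSD VGs/LVs" task can regenerate them idempotently per host and device.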
2026-01-08 00:43:58.112764 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-08 00:43:58.112871 | orchestrator | 2.16.14 2026-01-08 00:43:58.112887 | orchestrator | 2026-01-08 00:43:58.112900 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-08 00:43:58.112912 | orchestrator | 2026-01-08 00:43:58.112924 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-08 00:43:58.112935 | orchestrator | Thursday 08 January 2026 00:43:51 +0000 (0:00:00.301) 0:00:00.301 ****** 2026-01-08 00:43:58.112947 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-08 00:43:58.112958 | orchestrator | 2026-01-08 00:43:58.112969 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-08 00:43:58.112980 | orchestrator | Thursday 08 January 2026 00:43:51 +0000 (0:00:00.246) 0:00:00.548 ****** 2026-01-08 00:43:58.112992 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:43:58.113003 | orchestrator | 2026-01-08 00:43:58.113015 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:43:58.113027 | orchestrator | Thursday 08 January 2026 00:43:52 +0000 (0:00:00.246) 0:00:00.795 ****** 2026-01-08 00:43:58.113043 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-01-08 00:43:58.113063 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-01-08 00:43:58.113084 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-01-08 00:43:58.113104 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-01-08 00:43:58.113124 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-01-08 
00:43:58.113144 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-01-08 00:43:58.113163 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-01-08 00:43:58.113182 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-01-08 00:43:58.113206 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-01-08 00:43:58.113253 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-01-08 00:43:58.113276 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-01-08 00:43:58.113295 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-01-08 00:43:58.113345 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-01-08 00:43:58.113366 | orchestrator | 2026-01-08 00:43:58.113387 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:43:58.113405 | orchestrator | Thursday 08 January 2026 00:43:52 +0000 (0:00:00.533) 0:00:01.328 ****** 2026-01-08 00:43:58.113444 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:43:58.113457 | orchestrator | 2026-01-08 00:43:58.113471 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:43:58.113511 | orchestrator | Thursday 08 January 2026 00:43:52 +0000 (0:00:00.217) 0:00:01.545 ****** 2026-01-08 00:43:58.113525 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:43:58.113538 | orchestrator | 2026-01-08 00:43:58.113557 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:43:58.113570 | orchestrator | Thursday 08 January 2026 00:43:53 +0000 (0:00:00.175) 0:00:01.720 ****** 2026-01-08 
00:43:58.113582 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:43:58.113594 | orchestrator | 2026-01-08 00:43:58.113607 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:43:58.113620 | orchestrator | Thursday 08 January 2026 00:43:53 +0000 (0:00:00.190) 0:00:01.910 ****** 2026-01-08 00:43:58.113633 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:43:58.113646 | orchestrator | 2026-01-08 00:43:58.113659 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:43:58.113671 | orchestrator | Thursday 08 January 2026 00:43:53 +0000 (0:00:00.187) 0:00:02.098 ****** 2026-01-08 00:43:58.113683 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:43:58.113694 | orchestrator | 2026-01-08 00:43:58.113705 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:43:58.113716 | orchestrator | Thursday 08 January 2026 00:43:53 +0000 (0:00:00.192) 0:00:02.290 ****** 2026-01-08 00:43:58.113727 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:43:58.113738 | orchestrator | 2026-01-08 00:43:58.113749 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:43:58.113759 | orchestrator | Thursday 08 January 2026 00:43:53 +0000 (0:00:00.183) 0:00:02.473 ****** 2026-01-08 00:43:58.113770 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:43:58.113781 | orchestrator | 2026-01-08 00:43:58.113792 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:43:58.113802 | orchestrator | Thursday 08 January 2026 00:43:53 +0000 (0:00:00.188) 0:00:02.662 ****** 2026-01-08 00:43:58.113813 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:43:58.113824 | orchestrator | 2026-01-08 00:43:58.113835 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-01-08 00:43:58.113845 | orchestrator | Thursday 08 January 2026 00:43:54 +0000 (0:00:00.158) 0:00:02.821 ****** 2026-01-08 00:43:58.113856 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2) 2026-01-08 00:43:58.113869 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2) 2026-01-08 00:43:58.113880 | orchestrator | 2026-01-08 00:43:58.113891 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:43:58.113923 | orchestrator | Thursday 08 January 2026 00:43:54 +0000 (0:00:00.301) 0:00:03.122 ****** 2026-01-08 00:43:58.113935 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_124f655d-2588-4acd-9ece-c76299342e82) 2026-01-08 00:43:58.113946 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_124f655d-2588-4acd-9ece-c76299342e82) 2026-01-08 00:43:58.113957 | orchestrator | 2026-01-08 00:43:58.113968 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:43:58.113979 | orchestrator | Thursday 08 January 2026 00:43:54 +0000 (0:00:00.451) 0:00:03.573 ****** 2026-01-08 00:43:58.113989 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6a42190a-2484-4e8d-b5ec-29d2f01455ea) 2026-01-08 00:43:58.114093 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6a42190a-2484-4e8d-b5ec-29d2f01455ea) 2026-01-08 00:43:58.114119 | orchestrator | 2026-01-08 00:43:58.114138 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:43:58.114158 | orchestrator | Thursday 08 January 2026 00:43:55 +0000 (0:00:00.460) 0:00:04.034 ****** 2026-01-08 00:43:58.114180 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0e18c249-c135-4dbc-997b-e877fb7ddadb) 2026-01-08 00:43:58.114203 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0e18c249-c135-4dbc-997b-e877fb7ddadb) 2026-01-08 00:43:58.114224 | orchestrator | 2026-01-08 00:43:58.114244 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:43:58.114262 | orchestrator | Thursday 08 January 2026 00:43:55 +0000 (0:00:00.627) 0:00:04.662 ****** 2026-01-08 00:43:58.114284 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-08 00:43:58.114304 | orchestrator | 2026-01-08 00:43:58.114324 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:43:58.114340 | orchestrator | Thursday 08 January 2026 00:43:56 +0000 (0:00:00.311) 0:00:04.973 ****** 2026-01-08 00:43:58.114351 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-01-08 00:43:58.114363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-01-08 00:43:58.114373 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-01-08 00:43:58.114384 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-01-08 00:43:58.114395 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-01-08 00:43:58.114405 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-01-08 00:43:58.114416 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-01-08 00:43:58.114427 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-01-08 00:43:58.114437 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-01-08 00:43:58.114448 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-01-08 00:43:58.114458 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-01-08 00:43:58.114469 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-01-08 00:43:58.114502 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-01-08 00:43:58.114514 | orchestrator | 2026-01-08 00:43:58.114525 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:43:58.114536 | orchestrator | Thursday 08 January 2026 00:43:56 +0000 (0:00:00.430) 0:00:05.404 ****** 2026-01-08 00:43:58.114546 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:43:58.114557 | orchestrator | 2026-01-08 00:43:58.114568 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:43:58.114579 | orchestrator | Thursday 08 January 2026 00:43:56 +0000 (0:00:00.196) 0:00:05.600 ****** 2026-01-08 00:43:58.114589 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:43:58.114600 | orchestrator | 2026-01-08 00:43:58.114611 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:43:58.114622 | orchestrator | Thursday 08 January 2026 00:43:57 +0000 (0:00:00.204) 0:00:05.805 ****** 2026-01-08 00:43:58.114632 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:43:58.114643 | orchestrator | 2026-01-08 00:43:58.114653 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:43:58.114664 | orchestrator | Thursday 08 January 2026 00:43:57 +0000 (0:00:00.241) 0:00:06.046 ****** 2026-01-08 00:43:58.114675 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:43:58.114694 | orchestrator | 2026-01-08 00:43:58.114705 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-01-08 00:43:58.114716 | orchestrator | Thursday 08 January 2026 00:43:57 +0000 (0:00:00.185) 0:00:06.231 ****** 2026-01-08 00:43:58.114726 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:43:58.114737 | orchestrator | 2026-01-08 00:43:58.114748 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:43:58.114759 | orchestrator | Thursday 08 January 2026 00:43:57 +0000 (0:00:00.195) 0:00:06.426 ****** 2026-01-08 00:43:58.114769 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:43:58.114780 | orchestrator | 2026-01-08 00:43:58.114791 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:43:58.114801 | orchestrator | Thursday 08 January 2026 00:43:57 +0000 (0:00:00.174) 0:00:06.601 ****** 2026-01-08 00:43:58.114812 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:43:58.114823 | orchestrator | 2026-01-08 00:43:58.114845 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:44:07.021246 | orchestrator | Thursday 08 January 2026 00:43:58 +0000 (0:00:00.172) 0:00:06.773 ****** 2026-01-08 00:44:07.021353 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:07.021370 | orchestrator | 2026-01-08 00:44:07.021384 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:44:07.021396 | orchestrator | Thursday 08 January 2026 00:43:58 +0000 (0:00:00.200) 0:00:06.974 ****** 2026-01-08 00:44:07.021408 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-01-08 00:44:07.021419 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-01-08 00:44:07.021431 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-01-08 00:44:07.021442 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-01-08 00:44:07.021453 | orchestrator | 2026-01-08 
00:44:07.021465 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:44:07.021569 | orchestrator | Thursday 08 January 2026 00:43:59 +0000 (0:00:00.839) 0:00:07.813 ****** 2026-01-08 00:44:07.021585 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:07.021596 | orchestrator | 2026-01-08 00:44:07.021608 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:44:07.021619 | orchestrator | Thursday 08 January 2026 00:43:59 +0000 (0:00:00.199) 0:00:08.014 ****** 2026-01-08 00:44:07.021630 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:07.021642 | orchestrator | 2026-01-08 00:44:07.021653 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:44:07.021664 | orchestrator | Thursday 08 January 2026 00:43:59 +0000 (0:00:00.178) 0:00:08.192 ****** 2026-01-08 00:44:07.021676 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:07.021687 | orchestrator | 2026-01-08 00:44:07.021698 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-08 00:44:07.021710 | orchestrator | Thursday 08 January 2026 00:43:59 +0000 (0:00:00.188) 0:00:08.381 ****** 2026-01-08 00:44:07.021721 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:07.021732 | orchestrator | 2026-01-08 00:44:07.021743 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-08 00:44:07.021754 | orchestrator | Thursday 08 January 2026 00:43:59 +0000 (0:00:00.194) 0:00:08.575 ****** 2026-01-08 00:44:07.021765 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:07.021776 | orchestrator | 2026-01-08 00:44:07.021790 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-08 00:44:07.021805 | orchestrator | Thursday 08 January 2026 00:44:00 +0000 (0:00:00.130) 
0:00:08.706 ****** 2026-01-08 00:44:07.021839 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a2587794-ee13-56a9-b71d-149b2fd55b33'}}) 2026-01-08 00:44:07.021853 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '703f1367-865b-52a8-8f96-c728fe171d20'}}) 2026-01-08 00:44:07.021868 | orchestrator | 2026-01-08 00:44:07.021881 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-08 00:44:07.021917 | orchestrator | Thursday 08 January 2026 00:44:00 +0000 (0:00:00.191) 0:00:08.897 ****** 2026-01-08 00:44:07.021933 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33', 'data_vg': 'ceph-a2587794-ee13-56a9-b71d-149b2fd55b33'}) 2026-01-08 00:44:07.021948 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-703f1367-865b-52a8-8f96-c728fe171d20', 'data_vg': 'ceph-703f1367-865b-52a8-8f96-c728fe171d20'}) 2026-01-08 00:44:07.021961 | orchestrator | 2026-01-08 00:44:07.021980 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-08 00:44:07.021994 | orchestrator | Thursday 08 January 2026 00:44:03 +0000 (0:00:03.068) 0:00:11.965 ****** 2026-01-08 00:44:07.022008 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33', 'data_vg': 'ceph-a2587794-ee13-56a9-b71d-149b2fd55b33'})  2026-01-08 00:44:07.022091 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-703f1367-865b-52a8-8f96-c728fe171d20', 'data_vg': 'ceph-703f1367-865b-52a8-8f96-c728fe171d20'})  2026-01-08 00:44:07.022105 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:07.022118 | orchestrator | 2026-01-08 00:44:07.022165 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-08 00:44:07.022180 | orchestrator | Thursday 08 January 2026 
00:44:03 +0000 (0:00:00.168) 0:00:12.133 ****** 2026-01-08 00:44:07.022194 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33', 'data_vg': 'ceph-a2587794-ee13-56a9-b71d-149b2fd55b33'}) 2026-01-08 00:44:07.022205 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-703f1367-865b-52a8-8f96-c728fe171d20', 'data_vg': 'ceph-703f1367-865b-52a8-8f96-c728fe171d20'}) 2026-01-08 00:44:07.022216 | orchestrator | 2026-01-08 00:44:07.022228 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-08 00:44:07.022239 | orchestrator | Thursday 08 January 2026 00:44:04 +0000 (0:00:01.428) 0:00:13.562 ****** 2026-01-08 00:44:07.022251 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33', 'data_vg': 'ceph-a2587794-ee13-56a9-b71d-149b2fd55b33'})  2026-01-08 00:44:07.022262 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-703f1367-865b-52a8-8f96-c728fe171d20', 'data_vg': 'ceph-703f1367-865b-52a8-8f96-c728fe171d20'})  2026-01-08 00:44:07.022273 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:07.022284 | orchestrator | 2026-01-08 00:44:07.022295 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-08 00:44:07.022308 | orchestrator | Thursday 08 January 2026 00:44:05 +0000 (0:00:00.156) 0:00:13.719 ****** 2026-01-08 00:44:07.022350 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:07.022371 | orchestrator | 2026-01-08 00:44:07.022391 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-08 00:44:07.022409 | orchestrator | Thursday 08 January 2026 00:44:05 +0000 (0:00:00.174) 0:00:13.894 ****** 2026-01-08 00:44:07.022427 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33', 'data_vg': 
'ceph-a2587794-ee13-56a9-b71d-149b2fd55b33'})  2026-01-08 00:44:07.022439 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-703f1367-865b-52a8-8f96-c728fe171d20', 'data_vg': 'ceph-703f1367-865b-52a8-8f96-c728fe171d20'})  2026-01-08 00:44:07.022450 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:07.022461 | orchestrator | 2026-01-08 00:44:07.022472 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-08 00:44:07.022510 | orchestrator | Thursday 08 January 2026 00:44:05 +0000 (0:00:00.356) 0:00:14.250 ****** 2026-01-08 00:44:07.022521 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:07.022532 | orchestrator | 2026-01-08 00:44:07.022543 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-08 00:44:07.022555 | orchestrator | Thursday 08 January 2026 00:44:05 +0000 (0:00:00.135) 0:00:14.386 ****** 2026-01-08 00:44:07.022578 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33', 'data_vg': 'ceph-a2587794-ee13-56a9-b71d-149b2fd55b33'})  2026-01-08 00:44:07.022589 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-703f1367-865b-52a8-8f96-c728fe171d20', 'data_vg': 'ceph-703f1367-865b-52a8-8f96-c728fe171d20'})  2026-01-08 00:44:07.022600 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:07.022611 | orchestrator | 2026-01-08 00:44:07.022622 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-08 00:44:07.022633 | orchestrator | Thursday 08 January 2026 00:44:05 +0000 (0:00:00.168) 0:00:14.554 ****** 2026-01-08 00:44:07.022644 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:07.022655 | orchestrator | 2026-01-08 00:44:07.022666 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-08 00:44:07.022677 | orchestrator | 
Thursday 08 January 2026 00:44:06 +0000 (0:00:00.173) 0:00:14.728 ****** 2026-01-08 00:44:07.022688 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33', 'data_vg': 'ceph-a2587794-ee13-56a9-b71d-149b2fd55b33'})  2026-01-08 00:44:07.022699 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-703f1367-865b-52a8-8f96-c728fe171d20', 'data_vg': 'ceph-703f1367-865b-52a8-8f96-c728fe171d20'})  2026-01-08 00:44:07.022711 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:07.022722 | orchestrator | 2026-01-08 00:44:07.022733 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-08 00:44:07.022744 | orchestrator | Thursday 08 January 2026 00:44:06 +0000 (0:00:00.164) 0:00:14.893 ****** 2026-01-08 00:44:07.022755 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:44:07.022766 | orchestrator | 2026-01-08 00:44:07.022777 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-08 00:44:07.022788 | orchestrator | Thursday 08 January 2026 00:44:06 +0000 (0:00:00.149) 0:00:15.042 ****** 2026-01-08 00:44:07.022805 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33', 'data_vg': 'ceph-a2587794-ee13-56a9-b71d-149b2fd55b33'})  2026-01-08 00:44:07.022817 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-703f1367-865b-52a8-8f96-c728fe171d20', 'data_vg': 'ceph-703f1367-865b-52a8-8f96-c728fe171d20'})  2026-01-08 00:44:07.022828 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:07.022839 | orchestrator | 2026-01-08 00:44:07.022850 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-01-08 00:44:07.022861 | orchestrator | Thursday 08 January 2026 00:44:06 +0000 (0:00:00.157) 0:00:15.200 ****** 2026-01-08 00:44:07.022872 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33', 'data_vg': 'ceph-a2587794-ee13-56a9-b71d-149b2fd55b33'})  2026-01-08 00:44:07.022883 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-703f1367-865b-52a8-8f96-c728fe171d20', 'data_vg': 'ceph-703f1367-865b-52a8-8f96-c728fe171d20'})  2026-01-08 00:44:07.022894 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:07.022905 | orchestrator | 2026-01-08 00:44:07.022916 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-08 00:44:07.022927 | orchestrator | Thursday 08 January 2026 00:44:06 +0000 (0:00:00.161) 0:00:15.361 ****** 2026-01-08 00:44:07.022938 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33', 'data_vg': 'ceph-a2587794-ee13-56a9-b71d-149b2fd55b33'})  2026-01-08 00:44:07.022949 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-703f1367-865b-52a8-8f96-c728fe171d20', 'data_vg': 'ceph-703f1367-865b-52a8-8f96-c728fe171d20'})  2026-01-08 00:44:07.022960 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:07.022971 | orchestrator | 2026-01-08 00:44:07.022982 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-08 00:44:07.023008 | orchestrator | Thursday 08 January 2026 00:44:06 +0000 (0:00:00.169) 0:00:15.530 ****** 2026-01-08 00:44:07.023020 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:07.023030 | orchestrator | 2026-01-08 00:44:07.023041 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-08 00:44:07.023061 | orchestrator | Thursday 08 January 2026 00:44:07 +0000 (0:00:00.149) 0:00:15.680 ****** 2026-01-08 00:44:14.224361 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:14.224543 | orchestrator | 2026-01-08 00:44:14.224556 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-01-08 00:44:14.224565 | orchestrator | Thursday 08 January 2026 00:44:07 +0000 (0:00:00.132) 0:00:15.813 ****** 2026-01-08 00:44:14.224573 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:14.224580 | orchestrator | 2026-01-08 00:44:14.224587 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-08 00:44:14.224594 | orchestrator | Thursday 08 January 2026 00:44:07 +0000 (0:00:00.153) 0:00:15.966 ****** 2026-01-08 00:44:14.224602 | orchestrator | ok: [testbed-node-3] => { 2026-01-08 00:44:14.224610 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-08 00:44:14.224617 | orchestrator | } 2026-01-08 00:44:14.224624 | orchestrator | 2026-01-08 00:44:14.224631 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-08 00:44:14.224638 | orchestrator | Thursday 08 January 2026 00:44:07 +0000 (0:00:00.416) 0:00:16.383 ****** 2026-01-08 00:44:14.224645 | orchestrator | ok: [testbed-node-3] => { 2026-01-08 00:44:14.224651 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-08 00:44:14.224658 | orchestrator | } 2026-01-08 00:44:14.224665 | orchestrator | 2026-01-08 00:44:14.224672 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-08 00:44:14.224679 | orchestrator | Thursday 08 January 2026 00:44:07 +0000 (0:00:00.168) 0:00:16.551 ****** 2026-01-08 00:44:14.224686 | orchestrator | ok: [testbed-node-3] => { 2026-01-08 00:44:14.224694 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-08 00:44:14.224701 | orchestrator | } 2026-01-08 00:44:14.224707 | orchestrator | 2026-01-08 00:44:14.224714 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-08 00:44:14.224721 | orchestrator | Thursday 08 January 2026 00:44:08 +0000 (0:00:00.154) 0:00:16.705 ****** 2026-01-08 00:44:14.224728 | orchestrator | ok: 
[testbed-node-3] 2026-01-08 00:44:14.224735 | orchestrator | 2026-01-08 00:44:14.224741 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-08 00:44:14.224748 | orchestrator | Thursday 08 January 2026 00:44:08 +0000 (0:00:00.788) 0:00:17.493 ****** 2026-01-08 00:44:14.224755 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:44:14.224761 | orchestrator | 2026-01-08 00:44:14.224768 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-08 00:44:14.224775 | orchestrator | Thursday 08 January 2026 00:44:09 +0000 (0:00:00.524) 0:00:18.018 ****** 2026-01-08 00:44:14.224782 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:44:14.224789 | orchestrator | 2026-01-08 00:44:14.224795 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-08 00:44:14.224802 | orchestrator | Thursday 08 January 2026 00:44:09 +0000 (0:00:00.525) 0:00:18.544 ****** 2026-01-08 00:44:14.224809 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:44:14.224816 | orchestrator | 2026-01-08 00:44:14.224823 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-08 00:44:14.224830 | orchestrator | Thursday 08 January 2026 00:44:10 +0000 (0:00:00.159) 0:00:18.703 ****** 2026-01-08 00:44:14.224836 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:14.224843 | orchestrator | 2026-01-08 00:44:14.224850 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-08 00:44:14.224857 | orchestrator | Thursday 08 January 2026 00:44:10 +0000 (0:00:00.119) 0:00:18.823 ****** 2026-01-08 00:44:14.224863 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:14.224870 | orchestrator | 2026-01-08 00:44:14.224877 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-08 00:44:14.224907 | orchestrator | 
Thursday 08 January 2026 00:44:10 +0000 (0:00:00.123) 0:00:18.946 ****** 2026-01-08 00:44:14.224916 | orchestrator | ok: [testbed-node-3] => { 2026-01-08 00:44:14.224924 | orchestrator |  "vgs_report": { 2026-01-08 00:44:14.224932 | orchestrator |  "vg": [] 2026-01-08 00:44:14.224940 | orchestrator |  } 2026-01-08 00:44:14.224948 | orchestrator | } 2026-01-08 00:44:14.224956 | orchestrator | 2026-01-08 00:44:14.224964 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-08 00:44:14.224971 | orchestrator | Thursday 08 January 2026 00:44:10 +0000 (0:00:00.188) 0:00:19.135 ****** 2026-01-08 00:44:14.224979 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:14.224986 | orchestrator | 2026-01-08 00:44:14.224994 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-08 00:44:14.225020 | orchestrator | Thursday 08 January 2026 00:44:10 +0000 (0:00:00.161) 0:00:19.297 ****** 2026-01-08 00:44:14.225028 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:14.225036 | orchestrator | 2026-01-08 00:44:14.225044 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-08 00:44:14.225052 | orchestrator | Thursday 08 January 2026 00:44:10 +0000 (0:00:00.153) 0:00:19.450 ****** 2026-01-08 00:44:14.225059 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:14.225066 | orchestrator | 2026-01-08 00:44:14.225074 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-08 00:44:14.225082 | orchestrator | Thursday 08 January 2026 00:44:11 +0000 (0:00:00.441) 0:00:19.892 ****** 2026-01-08 00:44:14.225089 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:44:14.225096 | orchestrator | 2026-01-08 00:44:14.225104 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-08 00:44:14.225112 | orchestrator | 
Thursday 08 January 2026 00:44:11 +0000 (0:00:00.163) 0:00:20.055 ******
2026-01-08 00:44:14.225120 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:44:14.225128 | orchestrator |
2026-01-08 00:44:14.225136 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-08 00:44:14.225144 | orchestrator | Thursday 08 January 2026 00:44:11 +0000 (0:00:00.163) 0:00:20.219 ******
2026-01-08 00:44:14.225151 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:44:14.225159 | orchestrator |
2026-01-08 00:44:14.225166 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-08 00:44:14.225174 | orchestrator | Thursday 08 January 2026 00:44:11 +0000 (0:00:00.144) 0:00:20.364 ******
2026-01-08 00:44:14.225182 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:44:14.225189 | orchestrator |
2026-01-08 00:44:14.225197 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-08 00:44:14.225206 | orchestrator | Thursday 08 January 2026 00:44:11 +0000 (0:00:00.135) 0:00:20.499 ******
2026-01-08 00:44:14.225229 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:44:14.225237 | orchestrator |
2026-01-08 00:44:14.225245 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-08 00:44:14.225253 | orchestrator | Thursday 08 January 2026 00:44:11 +0000 (0:00:00.149) 0:00:20.649 ******
2026-01-08 00:44:14.225261 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:44:14.225268 | orchestrator |
2026-01-08 00:44:14.225277 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-08 00:44:14.225284 | orchestrator | Thursday 08 January 2026 00:44:12 +0000 (0:00:00.146) 0:00:20.796 ******
2026-01-08 00:44:14.225291 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:44:14.225298 | orchestrator |
2026-01-08 00:44:14.225305 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-08 00:44:14.225312 | orchestrator | Thursday 08 January 2026 00:44:12 +0000 (0:00:00.132) 0:00:20.928 ******
2026-01-08 00:44:14.225318 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:44:14.225325 | orchestrator |
2026-01-08 00:44:14.225332 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-08 00:44:14.225338 | orchestrator | Thursday 08 January 2026 00:44:12 +0000 (0:00:00.149) 0:00:21.078 ******
2026-01-08 00:44:14.225351 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:44:14.225358 | orchestrator |
2026-01-08 00:44:14.225365 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-08 00:44:14.225371 | orchestrator | Thursday 08 January 2026 00:44:12 +0000 (0:00:00.179) 0:00:21.258 ******
2026-01-08 00:44:14.225378 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:44:14.225385 | orchestrator |
2026-01-08 00:44:14.225391 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-08 00:44:14.225398 | orchestrator | Thursday 08 January 2026 00:44:12 +0000 (0:00:00.158) 0:00:21.416 ******
2026-01-08 00:44:14.225405 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:44:14.225411 | orchestrator |
2026-01-08 00:44:14.225418 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-08 00:44:14.225425 | orchestrator | Thursday 08 January 2026 00:44:12 +0000 (0:00:00.169) 0:00:21.586 ******
2026-01-08 00:44:14.225433 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33', 'data_vg': 'ceph-a2587794-ee13-56a9-b71d-149b2fd55b33'})
2026-01-08 00:44:14.225442 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-703f1367-865b-52a8-8f96-c728fe171d20', 'data_vg': 'ceph-703f1367-865b-52a8-8f96-c728fe171d20'})
2026-01-08 00:44:14.225449 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:44:14.225455 | orchestrator |
2026-01-08 00:44:14.225462 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-08 00:44:14.225484 | orchestrator | Thursday 08 January 2026 00:44:13 +0000 (0:00:00.426) 0:00:22.013 ******
2026-01-08 00:44:14.225491 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33', 'data_vg': 'ceph-a2587794-ee13-56a9-b71d-149b2fd55b33'})
2026-01-08 00:44:14.225498 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-703f1367-865b-52a8-8f96-c728fe171d20', 'data_vg': 'ceph-703f1367-865b-52a8-8f96-c728fe171d20'})
2026-01-08 00:44:14.225505 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:44:14.225511 | orchestrator |
2026-01-08 00:44:14.225518 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-08 00:44:14.225529 | orchestrator | Thursday 08 January 2026 00:44:13 +0000 (0:00:00.161) 0:00:22.174 ******
2026-01-08 00:44:14.225536 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33', 'data_vg': 'ceph-a2587794-ee13-56a9-b71d-149b2fd55b33'})
2026-01-08 00:44:14.225543 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-703f1367-865b-52a8-8f96-c728fe171d20', 'data_vg': 'ceph-703f1367-865b-52a8-8f96-c728fe171d20'})
2026-01-08 00:44:14.225549 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:44:14.225556 | orchestrator |
2026-01-08 00:44:14.225563 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-08 00:44:14.225570 | orchestrator | Thursday 08 January 2026 00:44:13 +0000 (0:00:00.191) 0:00:22.366 ******
2026-01-08 00:44:14.225576 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33', 'data_vg': 'ceph-a2587794-ee13-56a9-b71d-149b2fd55b33'})
2026-01-08 00:44:14.225583 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-703f1367-865b-52a8-8f96-c728fe171d20', 'data_vg': 'ceph-703f1367-865b-52a8-8f96-c728fe171d20'})
2026-01-08 00:44:14.225590 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:44:14.225597 | orchestrator |
2026-01-08 00:44:14.225603 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-08 00:44:14.225610 | orchestrator | Thursday 08 January 2026 00:44:13 +0000 (0:00:00.169) 0:00:22.535 ******
2026-01-08 00:44:14.225617 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33', 'data_vg': 'ceph-a2587794-ee13-56a9-b71d-149b2fd55b33'})
2026-01-08 00:44:14.225623 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-703f1367-865b-52a8-8f96-c728fe171d20', 'data_vg': 'ceph-703f1367-865b-52a8-8f96-c728fe171d20'})
2026-01-08 00:44:14.225635 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:44:14.225641 | orchestrator |
2026-01-08 00:44:14.225648 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-08 00:44:14.225655 | orchestrator | Thursday 08 January 2026 00:44:14 +0000 (0:00:00.173) 0:00:22.708 ******
2026-01-08 00:44:14.225666 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33', 'data_vg': 'ceph-a2587794-ee13-56a9-b71d-149b2fd55b33'})
2026-01-08 00:44:20.050267 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-703f1367-865b-52a8-8f96-c728fe171d20', 'data_vg': 'ceph-703f1367-865b-52a8-8f96-c728fe171d20'})
2026-01-08 00:44:20.050374 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:44:20.050389 | orchestrator |
2026-01-08 00:44:20.050401 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-01-08 00:44:20.050413 | orchestrator | Thursday 08 January 2026 00:44:14 +0000 (0:00:00.177) 0:00:22.886 ******
2026-01-08 00:44:20.050423 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33', 'data_vg': 'ceph-a2587794-ee13-56a9-b71d-149b2fd55b33'})
2026-01-08 00:44:20.050434 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-703f1367-865b-52a8-8f96-c728fe171d20', 'data_vg': 'ceph-703f1367-865b-52a8-8f96-c728fe171d20'})
2026-01-08 00:44:20.050445 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:44:20.050455 | orchestrator |
2026-01-08 00:44:20.050465 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-08 00:44:20.050518 | orchestrator | Thursday 08 January 2026 00:44:14 +0000 (0:00:00.175) 0:00:23.062 ******
2026-01-08 00:44:20.050529 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33', 'data_vg': 'ceph-a2587794-ee13-56a9-b71d-149b2fd55b33'})
2026-01-08 00:44:20.050539 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-703f1367-865b-52a8-8f96-c728fe171d20', 'data_vg': 'ceph-703f1367-865b-52a8-8f96-c728fe171d20'})
2026-01-08 00:44:20.050549 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:44:20.050559 | orchestrator |
2026-01-08 00:44:20.050569 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-08 00:44:20.050579 | orchestrator | Thursday 08 January 2026 00:44:14 +0000 (0:00:00.179) 0:00:23.241 ******
2026-01-08 00:44:20.050589 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:44:20.050600 | orchestrator |
2026-01-08 00:44:20.050610 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-08 00:44:20.050620 | orchestrator | Thursday 08 January 2026 00:44:15 +0000 (0:00:00.571) 0:00:23.812 ******
2026-01-08 00:44:20.050630 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:44:20.050640 | orchestrator |
2026-01-08 00:44:20.050650 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-01-08 00:44:20.050660 | orchestrator | Thursday 08 January 2026 00:44:15 +0000 (0:00:00.643) 0:00:24.456 ******
2026-01-08 00:44:20.050669 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:44:20.050679 | orchestrator |
2026-01-08 00:44:20.050689 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-01-08 00:44:20.050699 | orchestrator | Thursday 08 January 2026 00:44:15 +0000 (0:00:00.153) 0:00:24.609 ******
2026-01-08 00:44:20.050709 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-703f1367-865b-52a8-8f96-c728fe171d20', 'vg_name': 'ceph-703f1367-865b-52a8-8f96-c728fe171d20'})
2026-01-08 00:44:20.050720 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33', 'vg_name': 'ceph-a2587794-ee13-56a9-b71d-149b2fd55b33'})
2026-01-08 00:44:20.050730 | orchestrator |
2026-01-08 00:44:20.050740 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-01-08 00:44:20.050750 | orchestrator | Thursday 08 January 2026 00:44:16 +0000 (0:00:00.205) 0:00:24.815 ******
2026-01-08 00:44:20.050783 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33', 'data_vg': 'ceph-a2587794-ee13-56a9-b71d-149b2fd55b33'})
2026-01-08 00:44:20.050796 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-703f1367-865b-52a8-8f96-c728fe171d20', 'data_vg': 'ceph-703f1367-865b-52a8-8f96-c728fe171d20'})
2026-01-08 00:44:20.050808 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:44:20.050819 | orchestrator |
2026-01-08 00:44:20.050831 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-01-08 00:44:20.050843 | orchestrator | Thursday 08 January 2026 00:44:16 +0000 (0:00:00.394) 0:00:25.210 ******
2026-01-08 00:44:20.050855 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33', 'data_vg': 'ceph-a2587794-ee13-56a9-b71d-149b2fd55b33'})
2026-01-08 00:44:20.050866 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-703f1367-865b-52a8-8f96-c728fe171d20', 'data_vg': 'ceph-703f1367-865b-52a8-8f96-c728fe171d20'})
2026-01-08 00:44:20.050878 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:44:20.050890 | orchestrator |
2026-01-08 00:44:20.050902 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-01-08 00:44:20.050913 | orchestrator | Thursday 08 January 2026 00:44:16 +0000 (0:00:00.156) 0:00:25.366 ******
2026-01-08 00:44:20.050924 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33', 'data_vg': 'ceph-a2587794-ee13-56a9-b71d-149b2fd55b33'})
2026-01-08 00:44:20.050935 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-703f1367-865b-52a8-8f96-c728fe171d20', 'data_vg': 'ceph-703f1367-865b-52a8-8f96-c728fe171d20'})
2026-01-08 00:44:20.050947 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:44:20.050958 | orchestrator |
2026-01-08 00:44:20.050970 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-01-08 00:44:20.050982 | orchestrator | Thursday 08 January 2026 00:44:16 +0000 (0:00:00.175) 0:00:25.542 ******
2026-01-08 00:44:20.051009 | orchestrator | ok: [testbed-node-3] => {
2026-01-08 00:44:20.051022 | orchestrator |     "lvm_report": {
2026-01-08 00:44:20.051034 | orchestrator |         "lv": [
2026-01-08 00:44:20.051045 | orchestrator |             {
2026-01-08 00:44:20.051055 | orchestrator |                 "lv_name": "osd-block-703f1367-865b-52a8-8f96-c728fe171d20",
2026-01-08 00:44:20.051066 | orchestrator |                 "vg_name": "ceph-703f1367-865b-52a8-8f96-c728fe171d20"
2026-01-08 00:44:20.051075 | orchestrator |             },
2026-01-08 00:44:20.051085 | orchestrator |             {
2026-01-08 00:44:20.051095 | orchestrator |                 "lv_name": "osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33",
2026-01-08 00:44:20.051105 | orchestrator |                 "vg_name": "ceph-a2587794-ee13-56a9-b71d-149b2fd55b33"
2026-01-08 00:44:20.051114 | orchestrator |             }
2026-01-08 00:44:20.051124 | orchestrator |         ],
2026-01-08 00:44:20.051134 | orchestrator |         "pv": [
2026-01-08 00:44:20.051143 | orchestrator |             {
2026-01-08 00:44:20.051153 | orchestrator |                 "pv_name": "/dev/sdb",
2026-01-08 00:44:20.051163 | orchestrator |                 "vg_name": "ceph-a2587794-ee13-56a9-b71d-149b2fd55b33"
2026-01-08 00:44:20.051172 | orchestrator |             },
2026-01-08 00:44:20.051182 | orchestrator |             {
2026-01-08 00:44:20.051192 | orchestrator |                 "pv_name": "/dev/sdc",
2026-01-08 00:44:20.051202 | orchestrator |                 "vg_name": "ceph-703f1367-865b-52a8-8f96-c728fe171d20"
2026-01-08 00:44:20.051228 | orchestrator |             }
2026-01-08 00:44:20.051239 | orchestrator |         ]
2026-01-08 00:44:20.051249 | orchestrator |     }
2026-01-08 00:44:20.051258 | orchestrator | }
2026-01-08 00:44:20.051269 | orchestrator |
2026-01-08 00:44:20.051278 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-08 00:44:20.051288 | orchestrator |
2026-01-08 00:44:20.051298 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-08 00:44:20.051315 | orchestrator | Thursday 08 January 2026 00:44:17 +0000 (0:00:00.297) 0:00:25.839 ******
2026-01-08 00:44:20.051325 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-08 00:44:20.051335 | orchestrator |
2026-01-08 00:44:20.051344 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-08 00:44:20.051354 | orchestrator | Thursday 08 January 2026 00:44:17 +0000 (0:00:00.272) 0:00:26.112 ******
2026-01-08 00:44:20.051364 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:44:20.051374 | orchestrator |
2026-01-08 00:44:20.051384 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:20.051393 | orchestrator | Thursday 08 January 2026 00:44:17 +0000 (0:00:00.247) 0:00:26.359 ******
2026-01-08 00:44:20.051403 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-01-08 00:44:20.051413 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-01-08 00:44:20.051423 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-01-08 00:44:20.051432 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-01-08 00:44:20.051442 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-01-08 00:44:20.051452 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-01-08 00:44:20.051485 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-01-08 00:44:20.051496 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-01-08 00:44:20.051506 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-01-08 00:44:20.051515 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-01-08 00:44:20.051525 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-01-08 00:44:20.051535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-01-08 00:44:20.051545 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-01-08 00:44:20.051554 | orchestrator |
2026-01-08 00:44:20.051564 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:20.051574 | orchestrator | Thursday 08 January 2026 00:44:18 +0000 (0:00:00.440) 0:00:26.800 ******
2026-01-08 00:44:20.051584 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:20.051593 | orchestrator |
2026-01-08 00:44:20.051603 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:20.051613 | orchestrator | Thursday 08 January 2026 00:44:18 +0000 (0:00:00.202) 0:00:27.002 ******
2026-01-08 00:44:20.051622 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:20.051632 | orchestrator |
2026-01-08 00:44:20.051642 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:20.051652 | orchestrator | Thursday 08 January 2026 00:44:18 +0000 (0:00:00.214) 0:00:27.217 ******
2026-01-08 00:44:20.051661 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:20.051671 | orchestrator |
2026-01-08 00:44:20.051681 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:20.051691 | orchestrator | Thursday 08 January 2026 00:44:19 +0000 (0:00:00.759) 0:00:27.976 ******
2026-01-08 00:44:20.051700 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:20.051710 | orchestrator |
2026-01-08 00:44:20.051720 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:20.051730 | orchestrator | Thursday 08 January 2026 00:44:19 +0000 (0:00:00.236) 0:00:28.213 ******
2026-01-08 00:44:20.051740 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:20.051749 | orchestrator |
2026-01-08 00:44:20.051759 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:20.051775 | orchestrator | Thursday 08 January 2026 00:44:19 +0000 (0:00:00.268) 0:00:28.481 ******
2026-01-08 00:44:20.051785 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:20.051795 | orchestrator |
2026-01-08 00:44:20.051811 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:31.636982 | orchestrator | Thursday 08 January 2026 00:44:20 +0000 (0:00:00.226) 0:00:28.707 ******
2026-01-08 00:44:31.637150 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:31.637177 | orchestrator |
2026-01-08 00:44:31.637199 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:31.637218 | orchestrator | Thursday 08 January 2026 00:44:20 +0000 (0:00:00.220) 0:00:28.927 ******
2026-01-08 00:44:31.637237 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:31.637249 | orchestrator |
2026-01-08 00:44:31.637261 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:31.637273 | orchestrator | Thursday 08 January 2026 00:44:20 +0000 (0:00:00.266) 0:00:29.194 ******
2026-01-08 00:44:31.637284 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1)
2026-01-08 00:44:31.637298 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1)
2026-01-08 00:44:31.637309 | orchestrator |
2026-01-08 00:44:31.637320 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:31.637331 | orchestrator | Thursday 08 January 2026 00:44:21 +0000 (0:00:00.523) 0:00:29.717 ******
2026-01-08 00:44:31.637342 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d8742191-d797-4305-9777-b3b4a7e3f85b)
2026-01-08 00:44:31.637353 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d8742191-d797-4305-9777-b3b4a7e3f85b)
2026-01-08 00:44:31.637364 | orchestrator |
2026-01-08 00:44:31.637375 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:31.637386 | orchestrator | Thursday 08 January 2026 00:44:21 +0000 (0:00:00.471) 0:00:30.189 ******
2026-01-08 00:44:31.637397 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c2b7f2d1-b409-4164-9c96-a325340a2181)
2026-01-08 00:44:31.637407 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c2b7f2d1-b409-4164-9c96-a325340a2181)
2026-01-08 00:44:31.637418 | orchestrator |
2026-01-08 00:44:31.637429 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:31.637440 | orchestrator | Thursday 08 January 2026 00:44:22 +0000 (0:00:00.536) 0:00:30.726 ******
2026-01-08 00:44:31.637451 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7a76cbc0-0137-4ff9-923d-b4b1dcc050dd)
2026-01-08 00:44:31.637496 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7a76cbc0-0137-4ff9-923d-b4b1dcc050dd)
2026-01-08 00:44:31.637509 | orchestrator |
2026-01-08 00:44:31.637522 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:31.637535 | orchestrator | Thursday 08 January 2026 00:44:22 +0000 (0:00:00.699) 0:00:31.425 ******
2026-01-08 00:44:31.637547 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-08 00:44:31.637559 | orchestrator |
2026-01-08 00:44:31.637573 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:31.637586 | orchestrator | Thursday 08 January 2026 00:44:23 +0000 (0:00:00.580) 0:00:32.006 ******
2026-01-08 00:44:31.637620 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-01-08 00:44:31.637633 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-01-08 00:44:31.637647 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-01-08 00:44:31.637660 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-01-08 00:44:31.637672 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-01-08 00:44:31.637714 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-01-08 00:44:31.637727 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-01-08 00:44:31.637740 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-01-08 00:44:31.637752 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-01-08 00:44:31.637764 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-01-08 00:44:31.637776 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-01-08 00:44:31.637788 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-01-08 00:44:31.637801 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-01-08 00:44:31.637813 | orchestrator |
2026-01-08 00:44:31.637826 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:31.637837 | orchestrator | Thursday 08 January 2026 00:44:23 +0000 (0:00:00.658) 0:00:32.664 ******
2026-01-08 00:44:31.637848 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:31.637859 | orchestrator |
2026-01-08 00:44:31.637870 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:31.637881 | orchestrator | Thursday 08 January 2026 00:44:24 +0000 (0:00:00.206) 0:00:32.870 ******
2026-01-08 00:44:31.637892 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:31.637903 | orchestrator |
2026-01-08 00:44:31.637914 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:31.637925 | orchestrator | Thursday 08 January 2026 00:44:24 +0000 (0:00:00.203) 0:00:33.074 ******
2026-01-08 00:44:31.637936 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:31.637947 | orchestrator |
2026-01-08 00:44:31.637979 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:31.637992 | orchestrator | Thursday 08 January 2026 00:44:24 +0000 (0:00:00.212) 0:00:33.287 ******
2026-01-08 00:44:31.638003 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:31.638081 | orchestrator |
2026-01-08 00:44:31.638094 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:31.638105 | orchestrator | Thursday 08 January 2026 00:44:24 +0000 (0:00:00.209) 0:00:33.496 ******
2026-01-08 00:44:31.638116 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:31.638127 | orchestrator |
2026-01-08 00:44:31.638138 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:31.638149 | orchestrator | Thursday 08 January 2026 00:44:25 +0000 (0:00:00.211) 0:00:33.707 ******
2026-01-08 00:44:31.638160 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:31.638170 | orchestrator |
2026-01-08 00:44:31.638182 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:31.638192 | orchestrator | Thursday 08 January 2026 00:44:25 +0000 (0:00:00.200) 0:00:33.907 ******
2026-01-08 00:44:31.638203 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:31.638214 | orchestrator |
2026-01-08 00:44:31.638225 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:31.638236 | orchestrator | Thursday 08 January 2026 00:44:25 +0000 (0:00:00.203) 0:00:34.111 ******
2026-01-08 00:44:31.638247 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:31.638258 | orchestrator |
2026-01-08 00:44:31.638269 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:31.638280 | orchestrator | Thursday 08 January 2026 00:44:25 +0000 (0:00:00.210) 0:00:34.321 ******
2026-01-08 00:44:31.638291 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-01-08 00:44:31.638302 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-01-08 00:44:31.638314 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-01-08 00:44:31.638325 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-01-08 00:44:31.638345 | orchestrator |
2026-01-08 00:44:31.638356 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:31.638367 | orchestrator | Thursday 08 January 2026 00:44:26 +0000 (0:00:00.875) 0:00:35.196 ******
2026-01-08 00:44:31.638378 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:31.638389 | orchestrator |
2026-01-08 00:44:31.638400 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:31.638411 | orchestrator | Thursday 08 January 2026 00:44:26 +0000 (0:00:00.220) 0:00:35.417 ******
2026-01-08 00:44:31.638422 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:31.638433 | orchestrator |
2026-01-08 00:44:31.638444 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:31.638455 | orchestrator | Thursday 08 January 2026 00:44:27 +0000 (0:00:00.719) 0:00:36.136 ******
2026-01-08 00:44:31.638484 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:31.638495 | orchestrator |
2026-01-08 00:44:31.638506 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:31.638517 | orchestrator | Thursday 08 January 2026 00:44:27 +0000 (0:00:00.254) 0:00:36.391 ******
2026-01-08 00:44:31.638528 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:31.638539 | orchestrator |
2026-01-08 00:44:31.638549 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-08 00:44:31.638561 | orchestrator | Thursday 08 January 2026 00:44:27 +0000 (0:00:00.217) 0:00:36.608 ******
2026-01-08 00:44:31.638572 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:31.638582 | orchestrator |
2026-01-08 00:44:31.638598 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-08 00:44:31.638617 | orchestrator | Thursday 08 January 2026 00:44:28 +0000 (0:00:00.133) 0:00:36.741 ******
2026-01-08 00:44:31.638647 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '738668c3-85d9-5999-8ba6-58353e2d69fe'}})
2026-01-08 00:44:31.638669 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '3efd50ac-0c86-56a3-96dd-80e79744aaab'}})
2026-01-08 00:44:31.638687 | orchestrator |
2026-01-08 00:44:31.638707 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-08 00:44:31.638726 | orchestrator | Thursday 08 January 2026 00:44:28 +0000 (0:00:00.196) 0:00:36.938 ******
2026-01-08 00:44:31.638749 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe', 'data_vg': 'ceph-738668c3-85d9-5999-8ba6-58353e2d69fe'})
2026-01-08 00:44:31.638772 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab', 'data_vg': 'ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab'})
2026-01-08 00:44:31.638791 | orchestrator |
2026-01-08 00:44:31.638810 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-08 00:44:31.638830 | orchestrator | Thursday 08 January 2026 00:44:30 +0000 (0:00:01.843) 0:00:38.782 ******
2026-01-08 00:44:31.638846 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe', 'data_vg': 'ceph-738668c3-85d9-5999-8ba6-58353e2d69fe'})
2026-01-08 00:44:31.638860 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab', 'data_vg': 'ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab'})
2026-01-08 00:44:31.638871 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:31.638882 | orchestrator |
2026-01-08 00:44:31.638893 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-08 00:44:31.638904 | orchestrator | Thursday 08 January 2026 00:44:30 +0000 (0:00:00.154) 0:00:38.936 ******
2026-01-08 00:44:31.638915 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe', 'data_vg': 'ceph-738668c3-85d9-5999-8ba6-58353e2d69fe'})
2026-01-08 00:44:31.638938 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab', 'data_vg': 'ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab'})
2026-01-08 00:44:37.541081 | orchestrator |
2026-01-08 00:44:37.541494 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-08 00:44:37.541508 | orchestrator | Thursday 08 January 2026 00:44:31 +0000 (0:00:01.358) 0:00:40.295 ******
2026-01-08 00:44:37.541528 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe', 'data_vg': 'ceph-738668c3-85d9-5999-8ba6-58353e2d69fe'})
2026-01-08 00:44:37.541535 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab', 'data_vg': 'ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab'})
2026-01-08 00:44:37.541539 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:37.541544 | orchestrator |
2026-01-08 00:44:37.541549 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-08 00:44:37.541553 | orchestrator | Thursday 08 January 2026 00:44:31 +0000 (0:00:00.160) 0:00:40.456 ******
2026-01-08 00:44:37.541557 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:37.541561 | orchestrator |
2026-01-08 00:44:37.541565 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-08 00:44:37.541569 | orchestrator | Thursday 08 January 2026 00:44:31 +0000 (0:00:00.128) 0:00:40.585 ******
2026-01-08 00:44:37.541573 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe', 'data_vg': 'ceph-738668c3-85d9-5999-8ba6-58353e2d69fe'})
2026-01-08 00:44:37.541577 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab', 'data_vg': 'ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab'})
2026-01-08 00:44:37.541581 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:37.541585 | orchestrator |
2026-01-08 00:44:37.541588 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-08 00:44:37.541592 | orchestrator | Thursday 08 January 2026 00:44:32 +0000 (0:00:00.154) 0:00:40.739 ******
2026-01-08 00:44:37.541596 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:37.541600 | orchestrator |
2026-01-08 00:44:37.541604 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-08 00:44:37.541608 | orchestrator | Thursday 08 January 2026 00:44:32 +0000 (0:00:00.137) 0:00:40.877 ******
2026-01-08 00:44:37.541612 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe', 'data_vg': 'ceph-738668c3-85d9-5999-8ba6-58353e2d69fe'})
2026-01-08 00:44:37.541617 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab', 'data_vg': 'ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab'})
2026-01-08 00:44:37.541621 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:37.541625 | orchestrator |
2026-01-08 00:44:37.541630 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-08 00:44:37.541637 | orchestrator | Thursday 08 January 2026 00:44:32 +0000 (0:00:00.423) 0:00:41.300 ******
2026-01-08 00:44:37.541641 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:37.541646 | orchestrator |
2026-01-08 00:44:37.541650 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-08 00:44:37.541655 | orchestrator | Thursday 08 January 2026 00:44:32 +0000 (0:00:00.175) 0:00:41.476 ******
2026-01-08 00:44:37.541659 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe', 'data_vg': 'ceph-738668c3-85d9-5999-8ba6-58353e2d69fe'})
2026-01-08 00:44:37.541663 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab', 'data_vg': 'ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab'})
2026-01-08 00:44:37.541668 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:37.541672 | orchestrator |
2026-01-08 00:44:37.541677 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-08 00:44:37.541681 | orchestrator | Thursday 08 January 2026 00:44:32 +0000 (0:00:00.178) 0:00:41.628 ******
2026-01-08 00:44:37.541685 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:44:37.541706 | orchestrator |
2026-01-08 00:44:37.541711 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-08 00:44:37.541715 | orchestrator | Thursday 08 January 2026 00:44:33 +0000 (0:00:00.178) 0:00:41.806 ******
2026-01-08 00:44:37.541720 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe', 'data_vg': 'ceph-738668c3-85d9-5999-8ba6-58353e2d69fe'})
2026-01-08 00:44:37.541724 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab', 'data_vg': 'ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab'})
2026-01-08 00:44:37.541729 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:37.541733 | orchestrator |
2026-01-08 00:44:37.541738 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-08 00:44:37.541742 | orchestrator | Thursday 08 January 2026 00:44:33 +0000 (0:00:00.170) 0:00:41.976 ******
2026-01-08 00:44:37.541746 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe', 'data_vg': 'ceph-738668c3-85d9-5999-8ba6-58353e2d69fe'})
2026-01-08 00:44:37.541751 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab', 'data_vg': 'ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab'})
2026-01-08 00:44:37.541755 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:44:37.541759 | orchestrator |
2026-01-08 00:44:37.541764 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-08 00:44:37.541781 | orchestrator | Thursday 08 January 2026 00:44:33 +0000 (0:00:00.149) 0:00:42.126 ******
2026-01-08 00:44:37.541785 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe', 'data_vg': 'ceph-738668c3-85d9-5999-8ba6-58353e2d69fe'})
2026-01-08
00:44:37.541790 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab', 'data_vg': 'ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab'})  2026-01-08 00:44:37.541795 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:37.541799 | orchestrator | 2026-01-08 00:44:37.541803 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-08 00:44:37.541806 | orchestrator | Thursday 08 January 2026 00:44:33 +0000 (0:00:00.231) 0:00:42.357 ****** 2026-01-08 00:44:37.541810 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:37.541814 | orchestrator | 2026-01-08 00:44:37.541818 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-08 00:44:37.541822 | orchestrator | Thursday 08 January 2026 00:44:33 +0000 (0:00:00.158) 0:00:42.516 ****** 2026-01-08 00:44:37.541826 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:37.541829 | orchestrator | 2026-01-08 00:44:37.541833 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-08 00:44:37.541837 | orchestrator | Thursday 08 January 2026 00:44:33 +0000 (0:00:00.141) 0:00:42.658 ****** 2026-01-08 00:44:37.541841 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:37.541845 | orchestrator | 2026-01-08 00:44:37.541848 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-08 00:44:37.541852 | orchestrator | Thursday 08 January 2026 00:44:34 +0000 (0:00:00.149) 0:00:42.807 ****** 2026-01-08 00:44:37.541856 | orchestrator | ok: [testbed-node-4] => { 2026-01-08 00:44:37.541860 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-08 00:44:37.541864 | orchestrator | } 2026-01-08 00:44:37.541868 | orchestrator | 2026-01-08 00:44:37.541872 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-08 
00:44:37.541876 | orchestrator | Thursday 08 January 2026 00:44:34 +0000 (0:00:00.155) 0:00:42.963 ****** 2026-01-08 00:44:37.541879 | orchestrator | ok: [testbed-node-4] => { 2026-01-08 00:44:37.541883 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-08 00:44:37.541887 | orchestrator | } 2026-01-08 00:44:37.541891 | orchestrator | 2026-01-08 00:44:37.541894 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-08 00:44:37.541898 | orchestrator | Thursday 08 January 2026 00:44:34 +0000 (0:00:00.142) 0:00:43.106 ****** 2026-01-08 00:44:37.541906 | orchestrator | ok: [testbed-node-4] => { 2026-01-08 00:44:37.541910 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-08 00:44:37.541913 | orchestrator | } 2026-01-08 00:44:37.541917 | orchestrator | 2026-01-08 00:44:37.541921 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-08 00:44:37.541925 | orchestrator | Thursday 08 January 2026 00:44:34 +0000 (0:00:00.388) 0:00:43.494 ****** 2026-01-08 00:44:37.541929 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:44:37.541932 | orchestrator | 2026-01-08 00:44:37.541936 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-08 00:44:37.541943 | orchestrator | Thursday 08 January 2026 00:44:35 +0000 (0:00:00.506) 0:00:44.000 ****** 2026-01-08 00:44:37.541946 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:44:37.541950 | orchestrator | 2026-01-08 00:44:37.541954 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-08 00:44:37.541958 | orchestrator | Thursday 08 January 2026 00:44:35 +0000 (0:00:00.507) 0:00:44.508 ****** 2026-01-08 00:44:37.541962 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:44:37.541965 | orchestrator | 2026-01-08 00:44:37.541969 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-01-08 00:44:37.541973 | orchestrator | Thursday 08 January 2026 00:44:36 +0000 (0:00:00.539) 0:00:45.048 ****** 2026-01-08 00:44:37.541977 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:44:37.541981 | orchestrator | 2026-01-08 00:44:37.541984 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-08 00:44:37.541988 | orchestrator | Thursday 08 January 2026 00:44:36 +0000 (0:00:00.171) 0:00:45.219 ****** 2026-01-08 00:44:37.541992 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:37.541996 | orchestrator | 2026-01-08 00:44:37.542000 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-08 00:44:37.542003 | orchestrator | Thursday 08 January 2026 00:44:36 +0000 (0:00:00.107) 0:00:45.327 ****** 2026-01-08 00:44:37.542007 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:37.542011 | orchestrator | 2026-01-08 00:44:37.542070 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-08 00:44:37.542074 | orchestrator | Thursday 08 January 2026 00:44:36 +0000 (0:00:00.131) 0:00:45.458 ****** 2026-01-08 00:44:37.542078 | orchestrator | ok: [testbed-node-4] => { 2026-01-08 00:44:37.542082 | orchestrator |  "vgs_report": { 2026-01-08 00:44:37.542086 | orchestrator |  "vg": [] 2026-01-08 00:44:37.542090 | orchestrator |  } 2026-01-08 00:44:37.542094 | orchestrator | } 2026-01-08 00:44:37.542098 | orchestrator | 2026-01-08 00:44:37.542101 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-08 00:44:37.542105 | orchestrator | Thursday 08 January 2026 00:44:36 +0000 (0:00:00.146) 0:00:45.605 ****** 2026-01-08 00:44:37.542109 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:37.542113 | orchestrator | 2026-01-08 00:44:37.542117 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-01-08 00:44:37.542120 | orchestrator | Thursday 08 January 2026 00:44:37 +0000 (0:00:00.153) 0:00:45.759 ****** 2026-01-08 00:44:37.542124 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:37.542128 | orchestrator | 2026-01-08 00:44:37.542132 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-08 00:44:37.542135 | orchestrator | Thursday 08 January 2026 00:44:37 +0000 (0:00:00.157) 0:00:45.917 ****** 2026-01-08 00:44:37.542141 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:37.542147 | orchestrator | 2026-01-08 00:44:37.542153 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-08 00:44:37.542159 | orchestrator | Thursday 08 January 2026 00:44:37 +0000 (0:00:00.148) 0:00:46.065 ****** 2026-01-08 00:44:37.542165 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:37.542173 | orchestrator | 2026-01-08 00:44:37.542184 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-08 00:44:42.557802 | orchestrator | Thursday 08 January 2026 00:44:37 +0000 (0:00:00.135) 0:00:46.200 ****** 2026-01-08 00:44:42.558820 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:42.558844 | orchestrator | 2026-01-08 00:44:42.558852 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-08 00:44:42.558859 | orchestrator | Thursday 08 January 2026 00:44:37 +0000 (0:00:00.360) 0:00:46.561 ****** 2026-01-08 00:44:42.558866 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:42.558872 | orchestrator | 2026-01-08 00:44:42.558879 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-08 00:44:42.558886 | orchestrator | Thursday 08 January 2026 00:44:38 +0000 (0:00:00.136) 0:00:46.698 ****** 2026-01-08 00:44:42.558892 | orchestrator | skipping: [testbed-node-4] 
2026-01-08 00:44:42.558899 | orchestrator | 2026-01-08 00:44:42.558906 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-08 00:44:42.558913 | orchestrator | Thursday 08 January 2026 00:44:38 +0000 (0:00:00.160) 0:00:46.858 ****** 2026-01-08 00:44:42.558919 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:42.558925 | orchestrator | 2026-01-08 00:44:42.558931 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-08 00:44:42.558937 | orchestrator | Thursday 08 January 2026 00:44:38 +0000 (0:00:00.139) 0:00:46.998 ****** 2026-01-08 00:44:42.558943 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:42.558949 | orchestrator | 2026-01-08 00:44:42.558956 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-08 00:44:42.558963 | orchestrator | Thursday 08 January 2026 00:44:38 +0000 (0:00:00.144) 0:00:47.142 ****** 2026-01-08 00:44:42.558969 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:42.558976 | orchestrator | 2026-01-08 00:44:42.558982 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-08 00:44:42.558989 | orchestrator | Thursday 08 January 2026 00:44:38 +0000 (0:00:00.150) 0:00:47.293 ****** 2026-01-08 00:44:42.558996 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:42.559003 | orchestrator | 2026-01-08 00:44:42.559009 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-08 00:44:42.559015 | orchestrator | Thursday 08 January 2026 00:44:38 +0000 (0:00:00.148) 0:00:47.442 ****** 2026-01-08 00:44:42.559022 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:42.559029 | orchestrator | 2026-01-08 00:44:42.559035 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-08 00:44:42.559042 | orchestrator | 
Thursday 08 January 2026 00:44:38 +0000 (0:00:00.135) 0:00:47.577 ****** 2026-01-08 00:44:42.559049 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:42.559056 | orchestrator | 2026-01-08 00:44:42.559063 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-08 00:44:42.559070 | orchestrator | Thursday 08 January 2026 00:44:39 +0000 (0:00:00.147) 0:00:47.724 ****** 2026-01-08 00:44:42.559076 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:42.559082 | orchestrator | 2026-01-08 00:44:42.559090 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-08 00:44:42.559097 | orchestrator | Thursday 08 January 2026 00:44:39 +0000 (0:00:00.146) 0:00:47.871 ****** 2026-01-08 00:44:42.559105 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe', 'data_vg': 'ceph-738668c3-85d9-5999-8ba6-58353e2d69fe'})  2026-01-08 00:44:42.559114 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab', 'data_vg': 'ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab'})  2026-01-08 00:44:42.559120 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:42.559127 | orchestrator | 2026-01-08 00:44:42.559133 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-08 00:44:42.559139 | orchestrator | Thursday 08 January 2026 00:44:39 +0000 (0:00:00.179) 0:00:48.051 ****** 2026-01-08 00:44:42.559146 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe', 'data_vg': 'ceph-738668c3-85d9-5999-8ba6-58353e2d69fe'})  2026-01-08 00:44:42.559161 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab', 'data_vg': 'ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab'})  2026-01-08 00:44:42.559168 | orchestrator | skipping: 
[testbed-node-4] 2026-01-08 00:44:42.559174 | orchestrator | 2026-01-08 00:44:42.559180 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-08 00:44:42.559188 | orchestrator | Thursday 08 January 2026 00:44:39 +0000 (0:00:00.182) 0:00:48.234 ****** 2026-01-08 00:44:42.559194 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe', 'data_vg': 'ceph-738668c3-85d9-5999-8ba6-58353e2d69fe'})  2026-01-08 00:44:42.559201 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab', 'data_vg': 'ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab'})  2026-01-08 00:44:42.559208 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:42.559214 | orchestrator | 2026-01-08 00:44:42.559220 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-01-08 00:44:42.559228 | orchestrator | Thursday 08 January 2026 00:44:39 +0000 (0:00:00.178) 0:00:48.412 ****** 2026-01-08 00:44:42.559234 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe', 'data_vg': 'ceph-738668c3-85d9-5999-8ba6-58353e2d69fe'})  2026-01-08 00:44:42.559241 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab', 'data_vg': 'ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab'})  2026-01-08 00:44:42.559248 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:42.559254 | orchestrator | 2026-01-08 00:44:42.559277 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-08 00:44:42.559284 | orchestrator | Thursday 08 January 2026 00:44:40 +0000 (0:00:00.445) 0:00:48.858 ****** 2026-01-08 00:44:42.559290 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe', 'data_vg': 
'ceph-738668c3-85d9-5999-8ba6-58353e2d69fe'})  2026-01-08 00:44:42.559297 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab', 'data_vg': 'ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab'})  2026-01-08 00:44:42.559303 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:42.559309 | orchestrator | 2026-01-08 00:44:42.559316 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-08 00:44:42.559322 | orchestrator | Thursday 08 January 2026 00:44:40 +0000 (0:00:00.161) 0:00:49.019 ****** 2026-01-08 00:44:42.559330 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe', 'data_vg': 'ceph-738668c3-85d9-5999-8ba6-58353e2d69fe'})  2026-01-08 00:44:42.559336 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab', 'data_vg': 'ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab'})  2026-01-08 00:44:42.559342 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:42.559348 | orchestrator | 2026-01-08 00:44:42.559354 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-08 00:44:42.559361 | orchestrator | Thursday 08 January 2026 00:44:40 +0000 (0:00:00.159) 0:00:49.179 ****** 2026-01-08 00:44:42.559411 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe', 'data_vg': 'ceph-738668c3-85d9-5999-8ba6-58353e2d69fe'})  2026-01-08 00:44:42.559418 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab', 'data_vg': 'ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab'})  2026-01-08 00:44:42.559425 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:42.559431 | orchestrator | 2026-01-08 00:44:42.559437 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-08 
00:44:42.559444 | orchestrator | Thursday 08 January 2026 00:44:40 +0000 (0:00:00.172) 0:00:49.351 ****** 2026-01-08 00:44:42.559489 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe', 'data_vg': 'ceph-738668c3-85d9-5999-8ba6-58353e2d69fe'})  2026-01-08 00:44:42.559500 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab', 'data_vg': 'ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab'})  2026-01-08 00:44:42.559507 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:42.559514 | orchestrator | 2026-01-08 00:44:42.559520 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-08 00:44:42.559526 | orchestrator | Thursday 08 January 2026 00:44:40 +0000 (0:00:00.162) 0:00:49.513 ****** 2026-01-08 00:44:42.559533 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:44:42.559539 | orchestrator | 2026-01-08 00:44:42.559545 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-08 00:44:42.559552 | orchestrator | Thursday 08 January 2026 00:44:41 +0000 (0:00:00.514) 0:00:50.027 ****** 2026-01-08 00:44:42.559558 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:44:42.559564 | orchestrator | 2026-01-08 00:44:42.559570 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-08 00:44:42.559577 | orchestrator | Thursday 08 January 2026 00:44:41 +0000 (0:00:00.523) 0:00:50.551 ****** 2026-01-08 00:44:42.559583 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:44:42.559589 | orchestrator | 2026-01-08 00:44:42.559595 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-08 00:44:42.559601 | orchestrator | Thursday 08 January 2026 00:44:42 +0000 (0:00:00.159) 0:00:50.711 ****** 2026-01-08 00:44:42.559608 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab', 'vg_name': 'ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab'}) 2026-01-08 00:44:42.559617 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe', 'vg_name': 'ceph-738668c3-85d9-5999-8ba6-58353e2d69fe'}) 2026-01-08 00:44:42.559623 | orchestrator | 2026-01-08 00:44:42.559630 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-08 00:44:42.559637 | orchestrator | Thursday 08 January 2026 00:44:42 +0000 (0:00:00.169) 0:00:50.881 ****** 2026-01-08 00:44:42.559643 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe', 'data_vg': 'ceph-738668c3-85d9-5999-8ba6-58353e2d69fe'})  2026-01-08 00:44:42.559649 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab', 'data_vg': 'ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab'})  2026-01-08 00:44:42.559655 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:42.559662 | orchestrator | 2026-01-08 00:44:42.559668 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-08 00:44:42.559675 | orchestrator | Thursday 08 January 2026 00:44:42 +0000 (0:00:00.163) 0:00:51.044 ****** 2026-01-08 00:44:42.559681 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe', 'data_vg': 'ceph-738668c3-85d9-5999-8ba6-58353e2d69fe'})  2026-01-08 00:44:42.559694 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab', 'data_vg': 'ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab'})  2026-01-08 00:44:48.843610 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:48.843759 | orchestrator | 2026-01-08 00:44:48.843776 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-08 00:44:48.843791 | 
orchestrator | Thursday 08 January 2026 00:44:42 +0000 (0:00:00.171) 0:00:51.216 ****** 2026-01-08 00:44:48.843803 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe', 'data_vg': 'ceph-738668c3-85d9-5999-8ba6-58353e2d69fe'})  2026-01-08 00:44:48.843817 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab', 'data_vg': 'ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab'})  2026-01-08 00:44:48.843829 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:44:48.843868 | orchestrator | 2026-01-08 00:44:48.843880 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-08 00:44:48.843892 | orchestrator | Thursday 08 January 2026 00:44:42 +0000 (0:00:00.159) 0:00:51.376 ****** 2026-01-08 00:44:48.843904 | orchestrator | ok: [testbed-node-4] => { 2026-01-08 00:44:48.843915 | orchestrator |  "lvm_report": { 2026-01-08 00:44:48.843927 | orchestrator |  "lv": [ 2026-01-08 00:44:48.843938 | orchestrator |  { 2026-01-08 00:44:48.843949 | orchestrator |  "lv_name": "osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab", 2026-01-08 00:44:48.843962 | orchestrator |  "vg_name": "ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab" 2026-01-08 00:44:48.843973 | orchestrator |  }, 2026-01-08 00:44:48.843984 | orchestrator |  { 2026-01-08 00:44:48.843995 | orchestrator |  "lv_name": "osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe", 2026-01-08 00:44:48.844005 | orchestrator |  "vg_name": "ceph-738668c3-85d9-5999-8ba6-58353e2d69fe" 2026-01-08 00:44:48.844016 | orchestrator |  } 2026-01-08 00:44:48.844027 | orchestrator |  ], 2026-01-08 00:44:48.844038 | orchestrator |  "pv": [ 2026-01-08 00:44:48.844049 | orchestrator |  { 2026-01-08 00:44:48.844060 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-08 00:44:48.844073 | orchestrator |  "vg_name": "ceph-738668c3-85d9-5999-8ba6-58353e2d69fe" 2026-01-08 00:44:48.844086 | orchestrator |  }, 2026-01-08 
00:44:48.844098 | orchestrator |  { 2026-01-08 00:44:48.844110 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-08 00:44:48.844122 | orchestrator |  "vg_name": "ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab" 2026-01-08 00:44:48.844134 | orchestrator |  } 2026-01-08 00:44:48.844146 | orchestrator |  ] 2026-01-08 00:44:48.844158 | orchestrator |  } 2026-01-08 00:44:48.844171 | orchestrator | } 2026-01-08 00:44:48.844183 | orchestrator | 2026-01-08 00:44:48.844196 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-08 00:44:48.844208 | orchestrator | 2026-01-08 00:44:48.844221 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-08 00:44:48.844253 | orchestrator | Thursday 08 January 2026 00:44:43 +0000 (0:00:00.507) 0:00:51.883 ****** 2026-01-08 00:44:48.844266 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-08 00:44:48.844279 | orchestrator | 2026-01-08 00:44:48.844293 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-08 00:44:48.844305 | orchestrator | Thursday 08 January 2026 00:44:43 +0000 (0:00:00.289) 0:00:52.172 ****** 2026-01-08 00:44:48.844318 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:44:48.844330 | orchestrator | 2026-01-08 00:44:48.844343 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:44:48.844355 | orchestrator | Thursday 08 January 2026 00:44:43 +0000 (0:00:00.244) 0:00:52.417 ****** 2026-01-08 00:44:48.844368 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-01-08 00:44:48.844380 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-01-08 00:44:48.844393 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-01-08 00:44:48.844405 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-01-08 00:44:48.844418 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-01-08 00:44:48.844431 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-01-08 00:44:48.844442 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-01-08 00:44:48.844473 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-01-08 00:44:48.844484 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-01-08 00:44:48.844504 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-01-08 00:44:48.844515 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-01-08 00:44:48.844526 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-01-08 00:44:48.844537 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-01-08 00:44:48.844547 | orchestrator | 2026-01-08 00:44:48.844564 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:44:48.844575 | orchestrator | Thursday 08 January 2026 00:44:44 +0000 (0:00:00.420) 0:00:52.838 ****** 2026-01-08 00:44:48.844586 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:44:48.844597 | orchestrator | 2026-01-08 00:44:48.844608 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-08 00:44:48.844619 | orchestrator | Thursday 08 January 2026 00:44:44 +0000 (0:00:00.228) 0:00:53.067 ****** 2026-01-08 00:44:48.844630 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:44:48.844641 | orchestrator | 2026-01-08 
00:44:48.844652 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:48.844683 | orchestrator | Thursday 08 January 2026 00:44:44 +0000 (0:00:00.229) 0:00:53.296 ******
2026-01-08 00:44:48.844694 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:48.844705 | orchestrator |
2026-01-08 00:44:48.844716 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:48.844727 | orchestrator | Thursday 08 January 2026 00:44:44 +0000 (0:00:00.193) 0:00:53.489 ******
2026-01-08 00:44:48.844738 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:48.844749 | orchestrator |
2026-01-08 00:44:48.844760 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:48.844771 | orchestrator | Thursday 08 January 2026 00:44:45 +0000 (0:00:00.190) 0:00:53.680 ******
2026-01-08 00:44:48.844782 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:48.844792 | orchestrator |
2026-01-08 00:44:48.844803 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:48.844814 | orchestrator | Thursday 08 January 2026 00:44:45 +0000 (0:00:00.197) 0:00:53.878 ******
2026-01-08 00:44:48.844825 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:48.844836 | orchestrator |
2026-01-08 00:44:48.844847 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:48.844858 | orchestrator | Thursday 08 January 2026 00:44:45 +0000 (0:00:00.676) 0:00:54.555 ******
2026-01-08 00:44:48.844869 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:48.844880 | orchestrator |
2026-01-08 00:44:48.844891 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:48.844902 | orchestrator | Thursday 08 January 2026 00:44:46 +0000 (0:00:00.213) 0:00:54.768 ******
2026-01-08 00:44:48.844912 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:48.844923 | orchestrator |
2026-01-08 00:44:48.844934 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:48.844946 | orchestrator | Thursday 08 January 2026 00:44:46 +0000 (0:00:00.216) 0:00:54.985 ******
2026-01-08 00:44:48.844957 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa)
2026-01-08 00:44:48.844969 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa)
2026-01-08 00:44:48.844980 | orchestrator |
2026-01-08 00:44:48.844991 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:48.845002 | orchestrator | Thursday 08 January 2026 00:44:46 +0000 (0:00:00.435) 0:00:55.421 ******
2026-01-08 00:44:48.845013 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c2709d8e-63c5-44e7-8dd5-568d2763b490)
2026-01-08 00:44:48.845023 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c2709d8e-63c5-44e7-8dd5-568d2763b490)
2026-01-08 00:44:48.845034 | orchestrator |
2026-01-08 00:44:48.845053 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:48.845070 | orchestrator | Thursday 08 January 2026 00:44:47 +0000 (0:00:00.425) 0:00:55.846 ******
2026-01-08 00:44:48.845081 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_083917d3-ae8a-40d6-964f-5c24c6020ef0)
2026-01-08 00:44:48.845092 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_083917d3-ae8a-40d6-964f-5c24c6020ef0)
2026-01-08 00:44:48.845103 | orchestrator |
2026-01-08 00:44:48.845114 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:48.845125 | orchestrator | Thursday 08 January 2026 00:44:47 +0000 (0:00:00.444) 0:00:56.290 ******
2026-01-08 00:44:48.845136 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ca658e2c-6884-4cb0-a984-2a30c4218b42)
2026-01-08 00:44:48.845147 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ca658e2c-6884-4cb0-a984-2a30c4218b42)
2026-01-08 00:44:48.845158 | orchestrator |
2026-01-08 00:44:48.845169 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-08 00:44:48.845180 | orchestrator | Thursday 08 January 2026 00:44:48 +0000 (0:00:00.422) 0:00:56.713 ******
2026-01-08 00:44:48.845191 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-08 00:44:48.845202 | orchestrator |
2026-01-08 00:44:48.845212 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:48.845223 | orchestrator | Thursday 08 January 2026 00:44:48 +0000 (0:00:00.337) 0:00:57.050 ******
2026-01-08 00:44:48.845234 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-01-08 00:44:48.845245 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-01-08 00:44:48.845256 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-01-08 00:44:48.845267 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-01-08 00:44:48.845277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-01-08 00:44:48.845288 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-01-08 00:44:48.845299 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-01-08 00:44:48.845310 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-01-08 00:44:48.845321 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-01-08 00:44:48.845332 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-01-08 00:44:48.845343 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-01-08 00:44:48.845361 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-01-08 00:44:58.256204 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-01-08 00:44:58.256320 | orchestrator |
2026-01-08 00:44:58.256338 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:58.256352 | orchestrator | Thursday 08 January 2026 00:44:48 +0000 (0:00:00.442) 0:00:57.493 ******
2026-01-08 00:44:58.256364 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:58.256377 | orchestrator |
2026-01-08 00:44:58.256388 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:58.256400 | orchestrator | Thursday 08 January 2026 00:44:49 +0000 (0:00:00.212) 0:00:57.705 ******
2026-01-08 00:44:58.256411 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:58.256422 | orchestrator |
2026-01-08 00:44:58.256433 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:58.256498 | orchestrator | Thursday 08 January 2026 00:44:49 +0000 (0:00:00.785) 0:00:58.491 ******
2026-01-08 00:44:58.256537 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:58.256548 | orchestrator |
2026-01-08 00:44:58.256559 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:58.256570 | orchestrator | Thursday 08 January 2026 00:44:50 +0000 (0:00:00.245) 0:00:58.736 ******
2026-01-08 00:44:58.256581 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:58.256592 | orchestrator |
2026-01-08 00:44:58.256603 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:58.256614 | orchestrator | Thursday 08 January 2026 00:44:50 +0000 (0:00:00.203) 0:00:58.940 ******
2026-01-08 00:44:58.256625 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:58.256635 | orchestrator |
2026-01-08 00:44:58.256646 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:58.256657 | orchestrator | Thursday 08 January 2026 00:44:50 +0000 (0:00:00.206) 0:00:59.147 ******
2026-01-08 00:44:58.256668 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:58.256679 | orchestrator |
2026-01-08 00:44:58.256690 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:58.256701 | orchestrator | Thursday 08 January 2026 00:44:50 +0000 (0:00:00.212) 0:00:59.359 ******
2026-01-08 00:44:58.256712 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:58.256723 | orchestrator |
2026-01-08 00:44:58.256734 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:58.256744 | orchestrator | Thursday 08 January 2026 00:44:50 +0000 (0:00:00.206) 0:00:59.565 ******
2026-01-08 00:44:58.256755 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:58.256770 | orchestrator |
2026-01-08 00:44:58.256788 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:58.256807 | orchestrator | Thursday 08 January 2026 00:44:51 +0000 (0:00:00.216) 0:00:59.782 ******
2026-01-08 00:44:58.256824 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-01-08 00:44:58.256842 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-01-08 00:44:58.256862 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-01-08 00:44:58.256881 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-01-08 00:44:58.256901 | orchestrator |
2026-01-08 00:44:58.256918 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:58.256932 | orchestrator | Thursday 08 January 2026 00:44:51 +0000 (0:00:00.639) 0:01:00.422 ******
2026-01-08 00:44:58.256943 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:58.256954 | orchestrator |
2026-01-08 00:44:58.256966 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:58.256977 | orchestrator | Thursday 08 January 2026 00:44:51 +0000 (0:00:00.213) 0:01:00.636 ******
2026-01-08 00:44:58.256988 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:58.256999 | orchestrator |
2026-01-08 00:44:58.257010 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:58.257021 | orchestrator | Thursday 08 January 2026 00:44:52 +0000 (0:00:00.260) 0:01:00.896 ******
2026-01-08 00:44:58.257032 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:58.257043 | orchestrator |
2026-01-08 00:44:58.257054 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-08 00:44:58.257065 | orchestrator | Thursday 08 January 2026 00:44:52 +0000 (0:00:00.213) 0:01:01.109 ******
2026-01-08 00:44:58.257075 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:58.257087 | orchestrator |
2026-01-08 00:44:58.257097 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-08 00:44:58.257108 | orchestrator | Thursday 08 January 2026 00:44:52 +0000 (0:00:00.209) 0:01:01.319 ******
2026-01-08 00:44:58.257119 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:58.257130 | orchestrator |
2026-01-08 00:44:58.257141 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-08 00:44:58.257152 | orchestrator | Thursday 08 January 2026 00:44:53 +0000 (0:00:00.346) 0:01:01.665 ******
2026-01-08 00:44:58.257163 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e7c35fc3-220b-5a3c-9d36-601219d17f28'}})
2026-01-08 00:44:58.257183 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1538380d-5182-5482-9616-e6fa16e7f592'}})
2026-01-08 00:44:58.257195 | orchestrator |
2026-01-08 00:44:58.257205 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-08 00:44:58.257216 | orchestrator | Thursday 08 January 2026 00:44:53 +0000 (0:00:00.196) 0:01:01.862 ******
2026-01-08 00:44:58.257230 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28', 'data_vg': 'ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28'})
2026-01-08 00:44:58.257260 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1538380d-5182-5482-9616-e6fa16e7f592', 'data_vg': 'ceph-1538380d-5182-5482-9616-e6fa16e7f592'})
2026-01-08 00:44:58.257272 | orchestrator |
2026-01-08 00:44:58.257283 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-08 00:44:58.257312 | orchestrator | Thursday 08 January 2026 00:44:55 +0000 (0:00:01.864) 0:01:03.726 ******
2026-01-08 00:44:58.257324 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28', 'data_vg': 'ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28'})
2026-01-08 00:44:58.257337 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1538380d-5182-5482-9616-e6fa16e7f592', 'data_vg': 'ceph-1538380d-5182-5482-9616-e6fa16e7f592'})
2026-01-08 00:44:58.257347 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:58.257358 | orchestrator |
2026-01-08 00:44:58.257369 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-08 00:44:58.257381 | orchestrator | Thursday 08 January 2026 00:44:55 +0000 (0:00:00.166) 0:01:03.893 ******
2026-01-08 00:44:58.257392 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28', 'data_vg': 'ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28'})
2026-01-08 00:44:58.257403 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1538380d-5182-5482-9616-e6fa16e7f592', 'data_vg': 'ceph-1538380d-5182-5482-9616-e6fa16e7f592'})
2026-01-08 00:44:58.257414 | orchestrator |
2026-01-08 00:44:58.257425 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-08 00:44:58.257436 | orchestrator | Thursday 08 January 2026 00:44:56 +0000 (0:00:01.260) 0:01:05.154 ******
2026-01-08 00:44:58.257506 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28', 'data_vg': 'ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28'})
2026-01-08 00:44:58.257518 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1538380d-5182-5482-9616-e6fa16e7f592', 'data_vg': 'ceph-1538380d-5182-5482-9616-e6fa16e7f592'})
2026-01-08 00:44:58.257529 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:58.257540 | orchestrator |
2026-01-08 00:44:58.257551 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-08 00:44:58.257562 | orchestrator | Thursday 08 January 2026 00:44:56 +0000 (0:00:00.145) 0:01:05.326 ******
2026-01-08 00:44:58.257573 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:58.257584 | orchestrator |
2026-01-08 00:44:58.257595 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-08 00:44:58.257606 | orchestrator | Thursday 08 January 2026 00:44:56 +0000 (0:00:00.145) 0:01:05.472 ******
2026-01-08 00:44:58.257623 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28', 'data_vg': 'ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28'})
2026-01-08 00:44:58.257635 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1538380d-5182-5482-9616-e6fa16e7f592', 'data_vg': 'ceph-1538380d-5182-5482-9616-e6fa16e7f592'})
2026-01-08 00:44:58.257646 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:58.257657 | orchestrator |
2026-01-08 00:44:58.257668 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-08 00:44:58.257687 | orchestrator | Thursday 08 January 2026 00:44:56 +0000 (0:00:00.164) 0:01:05.637 ******
2026-01-08 00:44:58.257698 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:58.257709 | orchestrator |
2026-01-08 00:44:58.257720 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-08 00:44:58.257731 | orchestrator | Thursday 08 January 2026 00:44:57 +0000 (0:00:00.150) 0:01:05.788 ******
2026-01-08 00:44:58.257742 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28', 'data_vg': 'ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28'})
2026-01-08 00:44:58.257753 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1538380d-5182-5482-9616-e6fa16e7f592', 'data_vg': 'ceph-1538380d-5182-5482-9616-e6fa16e7f592'})
2026-01-08 00:44:58.257764 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:58.257775 | orchestrator |
2026-01-08 00:44:58.257786 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-08 00:44:58.257797 | orchestrator | Thursday 08 January 2026 00:44:57 +0000 (0:00:00.203) 0:01:05.992 ******
2026-01-08 00:44:58.257808 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:58.257818 | orchestrator |
2026-01-08 00:44:58.257829 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-08 00:44:58.257840 | orchestrator | Thursday 08 January 2026 00:44:57 +0000 (0:00:00.147) 0:01:06.140 ******
2026-01-08 00:44:58.257851 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28', 'data_vg': 'ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28'})
2026-01-08 00:44:58.257862 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1538380d-5182-5482-9616-e6fa16e7f592', 'data_vg': 'ceph-1538380d-5182-5482-9616-e6fa16e7f592'})
2026-01-08 00:44:58.257873 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:44:58.257884 | orchestrator |
2026-01-08 00:44:58.257895 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-08 00:44:58.257906 | orchestrator | Thursday 08 January 2026 00:44:57 +0000 (0:00:00.170) 0:01:06.310 ******
2026-01-08 00:44:58.257917 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:44:58.257928 | orchestrator |
2026-01-08 00:44:58.257939 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-08 00:44:58.257950 | orchestrator | Thursday 08 January 2026 00:44:58 +0000 (0:00:00.400) 0:01:06.711 ******
2026-01-08 00:44:58.257969 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28', 'data_vg': 'ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28'})
2026-01-08 00:45:04.704196 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1538380d-5182-5482-9616-e6fa16e7f592', 'data_vg': 'ceph-1538380d-5182-5482-9616-e6fa16e7f592'})
2026-01-08 00:45:04.705300 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:04.705357 | orchestrator |
2026-01-08 00:45:04.705370 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-08 00:45:04.705380 | orchestrator | Thursday 08 January 2026 00:44:58 +0000 (0:00:00.204) 0:01:06.915 ******
2026-01-08 00:45:04.705389 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28', 'data_vg': 'ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28'})
2026-01-08 00:45:04.705398 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1538380d-5182-5482-9616-e6fa16e7f592', 'data_vg': 'ceph-1538380d-5182-5482-9616-e6fa16e7f592'})
2026-01-08 00:45:04.705406 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:04.705414 | orchestrator |
2026-01-08 00:45:04.705422 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-08 00:45:04.705429 | orchestrator | Thursday 08 January 2026 00:44:58 +0000 (0:00:00.209) 0:01:07.125 ******
2026-01-08 00:45:04.705555 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28', 'data_vg': 'ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28'})
2026-01-08 00:45:04.705588 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1538380d-5182-5482-9616-e6fa16e7f592', 'data_vg': 'ceph-1538380d-5182-5482-9616-e6fa16e7f592'})
2026-01-08 00:45:04.705619 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:04.705627 | orchestrator |
2026-01-08 00:45:04.705635 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-08 00:45:04.705642 | orchestrator | Thursday 08 January 2026 00:44:58 +0000 (0:00:00.181) 0:01:07.306 ******
2026-01-08 00:45:04.705650 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:04.705657 | orchestrator |
2026-01-08 00:45:04.705664 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-08 00:45:04.705672 | orchestrator | Thursday 08 January 2026 00:44:58 +0000 (0:00:00.140) 0:01:07.447 ******
2026-01-08 00:45:04.705679 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:04.705686 | orchestrator |
2026-01-08 00:45:04.705694 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-08 00:45:04.705701 | orchestrator | Thursday 08 January 2026 00:44:58 +0000 (0:00:00.144) 0:01:07.591 ******
2026-01-08 00:45:04.705709 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:04.705716 | orchestrator |
2026-01-08 00:45:04.705735 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-08 00:45:04.705743 | orchestrator | Thursday 08 January 2026 00:44:59 +0000 (0:00:00.153) 0:01:07.744 ******
2026-01-08 00:45:04.705750 | orchestrator | ok: [testbed-node-5] => {
2026-01-08 00:45:04.705758 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-08 00:45:04.705766 | orchestrator | }
2026-01-08 00:45:04.705773 | orchestrator |
2026-01-08 00:45:04.705781 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-08 00:45:04.705788 | orchestrator | Thursday 08 January 2026 00:44:59 +0000 (0:00:00.133) 0:01:07.878 ******
2026-01-08 00:45:04.705796 | orchestrator | ok: [testbed-node-5] => {
2026-01-08 00:45:04.705803 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-01-08 00:45:04.705811 | orchestrator | }
2026-01-08 00:45:04.705818 | orchestrator |
2026-01-08 00:45:04.705826 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-08 00:45:04.705833 | orchestrator | Thursday 08 January 2026 00:44:59 +0000 (0:00:00.173) 0:01:08.052 ******
2026-01-08 00:45:04.705840 | orchestrator | ok: [testbed-node-5] => {
2026-01-08 00:45:04.705848 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-01-08 00:45:04.705857 | orchestrator | }
2026-01-08 00:45:04.705869 | orchestrator |
2026-01-08 00:45:04.705886 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-08 00:45:04.705900 | orchestrator | Thursday 08 January 2026 00:44:59 +0000 (0:00:00.182) 0:01:08.235 ******
2026-01-08 00:45:04.705912 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:45:04.705923 | orchestrator |
2026-01-08 00:45:04.705936 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-08 00:45:04.705948 | orchestrator | Thursday 08 January 2026 00:45:00 +0000 (0:00:00.543) 0:01:08.779 ******
2026-01-08 00:45:04.705960 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:45:04.705972 | orchestrator |
2026-01-08 00:45:04.705985 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-08 00:45:04.705993 | orchestrator | Thursday 08 January 2026 00:45:00 +0000 (0:00:00.593) 0:01:09.372 ******
2026-01-08 00:45:04.706000 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:45:04.706007 | orchestrator |
2026-01-08 00:45:04.706062 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-08 00:45:04.706072 | orchestrator | Thursday 08 January 2026 00:45:01 +0000 (0:00:00.758) 0:01:10.131 ******
2026-01-08 00:45:04.706082 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:45:04.706095 | orchestrator |
2026-01-08 00:45:04.706106 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-08 00:45:04.706118 | orchestrator | Thursday 08 January 2026 00:45:01 +0000 (0:00:00.145) 0:01:10.276 ******
2026-01-08 00:45:04.706129 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:04.706141 | orchestrator |
2026-01-08 00:45:04.706152 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-08 00:45:04.706176 | orchestrator | Thursday 08 January 2026 00:45:01 +0000 (0:00:00.114) 0:01:10.390 ******
2026-01-08 00:45:04.706189 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:04.706202 | orchestrator |
2026-01-08 00:45:04.706215 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-08 00:45:04.706228 | orchestrator | Thursday 08 January 2026 00:45:01 +0000 (0:00:00.119) 0:01:10.510 ******
2026-01-08 00:45:04.706251 | orchestrator | ok: [testbed-node-5] => {
2026-01-08 00:45:04.706259 | orchestrator |     "vgs_report": {
2026-01-08 00:45:04.706266 | orchestrator |         "vg": []
2026-01-08 00:45:04.706294 | orchestrator |     }
2026-01-08 00:45:04.706302 | orchestrator | }
2026-01-08 00:45:04.706310 | orchestrator |
2026-01-08 00:45:04.706317 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-08 00:45:04.706324 | orchestrator | Thursday 08 January 2026 00:45:02 +0000 (0:00:00.155) 0:01:10.665 ******
2026-01-08 00:45:04.706332 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:04.706339 | orchestrator |
2026-01-08 00:45:04.706346 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-08 00:45:04.706353 | orchestrator | Thursday 08 January 2026 00:45:02 +0000 (0:00:00.148) 0:01:10.813 ******
2026-01-08 00:45:04.706361 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:04.706368 | orchestrator |
2026-01-08 00:45:04.706427 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-08 00:45:04.706435 | orchestrator | Thursday 08 January 2026 00:45:02 +0000 (0:00:00.127) 0:01:10.941 ******
2026-01-08 00:45:04.706469 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:04.706478 | orchestrator |
2026-01-08 00:45:04.706486 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-08 00:45:04.706493 | orchestrator | Thursday 08 January 2026 00:45:02 +0000 (0:00:00.139) 0:01:11.080 ******
2026-01-08 00:45:04.706503 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:04.706515 | orchestrator |
2026-01-08 00:45:04.706525 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-08 00:45:04.706541 | orchestrator | Thursday 08 January 2026 00:45:02 +0000 (0:00:00.152) 0:01:11.232 ******
2026-01-08 00:45:04.706558 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:04.706568 | orchestrator |
2026-01-08 00:45:04.706579 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-08 00:45:04.706591 | orchestrator | Thursday 08 January 2026 00:45:02 +0000 (0:00:00.152) 0:01:11.385 ******
2026-01-08 00:45:04.706603 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:04.706615 | orchestrator |
2026-01-08 00:45:04.706626 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-08 00:45:04.706637 | orchestrator | Thursday 08 January 2026 00:45:02 +0000 (0:00:00.133) 0:01:11.518 ******
2026-01-08 00:45:04.706650 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:04.706663 | orchestrator |
2026-01-08 00:45:04.706676 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-08 00:45:04.706687 | orchestrator | Thursday 08 January 2026 00:45:02 +0000 (0:00:00.143) 0:01:11.662 ******
2026-01-08 00:45:04.706746 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:04.706762 | orchestrator |
2026-01-08 00:45:04.706776 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-08 00:45:04.706788 | orchestrator | Thursday 08 January 2026 00:45:03 +0000 (0:00:00.346) 0:01:12.008 ******
2026-01-08 00:45:04.706801 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:04.706812 | orchestrator |
2026-01-08 00:45:04.706833 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-08 00:45:04.706846 | orchestrator | Thursday 08 January 2026 00:45:03 +0000 (0:00:00.142) 0:01:12.151 ******
2026-01-08 00:45:04.706857 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:04.706869 | orchestrator |
2026-01-08 00:45:04.706879 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-08 00:45:04.706896 | orchestrator | Thursday 08 January 2026 00:45:03 +0000 (0:00:00.142) 0:01:12.293 ******
2026-01-08 00:45:04.706903 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:04.706910 | orchestrator |
2026-01-08 00:45:04.706918 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-08 00:45:04.706925 | orchestrator | Thursday 08 January 2026 00:45:03 +0000 (0:00:00.155) 0:01:12.448 ******
2026-01-08 00:45:04.706933 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:04.706940 | orchestrator |
2026-01-08 00:45:04.706947 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-08 00:45:04.706955 | orchestrator | Thursday 08 January 2026 00:45:03 +0000 (0:00:00.142) 0:01:12.591 ******
2026-01-08 00:45:04.706962 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:04.706969 | orchestrator |
2026-01-08 00:45:04.706976 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-08 00:45:04.706984 | orchestrator | Thursday 08 January 2026 00:45:04 +0000 (0:00:00.136) 0:01:12.728 ******
2026-01-08 00:45:04.706991 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:04.706998 | orchestrator |
2026-01-08 00:45:04.707005 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-08 00:45:04.707013 | orchestrator | Thursday 08 January 2026 00:45:04 +0000 (0:00:00.145) 0:01:12.873 ******
2026-01-08 00:45:04.707020 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28', 'data_vg': 'ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28'})
2026-01-08 00:45:04.707028 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1538380d-5182-5482-9616-e6fa16e7f592', 'data_vg': 'ceph-1538380d-5182-5482-9616-e6fa16e7f592'})
2026-01-08 00:45:04.707036 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:04.707043 | orchestrator |
2026-01-08 00:45:04.707050 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-08 00:45:04.707057 | orchestrator | Thursday 08 January 2026 00:45:04 +0000 (0:00:00.169) 0:01:13.043 ******
2026-01-08 00:45:04.707065 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28', 'data_vg': 'ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28'})
2026-01-08 00:45:04.707072 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1538380d-5182-5482-9616-e6fa16e7f592', 'data_vg': 'ceph-1538380d-5182-5482-9616-e6fa16e7f592'})
2026-01-08 00:45:04.707079 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:04.707087 | orchestrator |
2026-01-08 00:45:04.707094 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-08 00:45:04.707101 | orchestrator | Thursday 08 January 2026 00:45:04 +0000 (0:00:00.159) 0:01:13.203 ******
2026-01-08 00:45:04.707120 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28', 'data_vg': 'ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28'})
2026-01-08 00:45:07.735796 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1538380d-5182-5482-9616-e6fa16e7f592', 'data_vg': 'ceph-1538380d-5182-5482-9616-e6fa16e7f592'})
2026-01-08 00:45:07.735915 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:07.735944 | orchestrator |
2026-01-08 00:45:07.735958 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-08 00:45:07.735972 | orchestrator | Thursday 08 January 2026 00:45:04 +0000 (0:00:00.161) 0:01:13.365 ******
2026-01-08 00:45:07.735984 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28', 'data_vg': 'ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28'})
2026-01-08 00:45:07.735996 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1538380d-5182-5482-9616-e6fa16e7f592', 'data_vg': 'ceph-1538380d-5182-5482-9616-e6fa16e7f592'})
2026-01-08 00:45:07.736007 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:07.736019 | orchestrator |
2026-01-08 00:45:07.736030 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-08 00:45:07.736067 | orchestrator | Thursday 08 January 2026 00:45:04 +0000 (0:00:00.167) 0:01:13.533 ******
2026-01-08 00:45:07.736079 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28', 'data_vg': 'ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28'})
2026-01-08 00:45:07.736091 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1538380d-5182-5482-9616-e6fa16e7f592', 'data_vg': 'ceph-1538380d-5182-5482-9616-e6fa16e7f592'})
2026-01-08 00:45:07.736102 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:07.736112 | orchestrator |
2026-01-08 00:45:07.736124 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-08 00:45:07.736135 | orchestrator | Thursday 08 January 2026 00:45:05 +0000 (0:00:00.165) 0:01:13.698 ******
2026-01-08 00:45:07.736146 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28', 'data_vg': 'ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28'})
2026-01-08 00:45:07.736157 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1538380d-5182-5482-9616-e6fa16e7f592', 'data_vg': 'ceph-1538380d-5182-5482-9616-e6fa16e7f592'})
2026-01-08 00:45:07.736168 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:07.736179 | orchestrator |
2026-01-08 00:45:07.736190 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-01-08 00:45:07.736202 | orchestrator | Thursday 08 January 2026 00:45:05 +0000 (0:00:00.381) 0:01:14.080 ******
2026-01-08 00:45:07.736218 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28', 'data_vg': 'ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28'})
2026-01-08 00:45:07.736235 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1538380d-5182-5482-9616-e6fa16e7f592', 'data_vg': 'ceph-1538380d-5182-5482-9616-e6fa16e7f592'})
2026-01-08 00:45:07.736255 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:07.736272 | orchestrator |
2026-01-08 00:45:07.736290 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-08 00:45:07.736309 | orchestrator | Thursday 08 January 2026 00:45:05 +0000 (0:00:00.171) 0:01:14.251 ******
2026-01-08 00:45:07.736328 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28', 'data_vg': 'ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28'})
2026-01-08 00:45:07.736347 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1538380d-5182-5482-9616-e6fa16e7f592', 'data_vg': 'ceph-1538380d-5182-5482-9616-e6fa16e7f592'})
2026-01-08 00:45:07.736365 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:07.736383 | orchestrator |
2026-01-08 00:45:07.736400 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-08 00:45:07.736418 | orchestrator | Thursday 08 January 2026 00:45:05 +0000 (0:00:00.144) 0:01:14.396 ******
2026-01-08 00:45:07.736489 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:45:07.736512 | orchestrator |
2026-01-08 00:45:07.736531 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-08 00:45:07.736549 | orchestrator | Thursday 08 January 2026 00:45:06 +0000 (0:00:00.522) 0:01:14.919 ******
2026-01-08 00:45:07.736569 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:45:07.736588 | orchestrator |
2026-01-08 00:45:07.736607 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-01-08 00:45:07.736627 | orchestrator | Thursday 08 January 2026 00:45:06 +0000 (0:00:00.512) 0:01:15.431 ******
2026-01-08 00:45:07.736645 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:45:07.736663 | orchestrator |
2026-01-08 00:45:07.736681 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-01-08 00:45:07.736701 | orchestrator | Thursday 08 January 2026 00:45:06 +0000 (0:00:00.151) 0:01:15.583 ******
2026-01-08 00:45:07.736719 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-1538380d-5182-5482-9616-e6fa16e7f592', 'vg_name': 'ceph-1538380d-5182-5482-9616-e6fa16e7f592'})
2026-01-08 00:45:07.736738 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28', 'vg_name': 'ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28'})
2026-01-08 00:45:07.736763 | orchestrator |
2026-01-08 00:45:07.736774 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-01-08 00:45:07.736786 | orchestrator | Thursday 08 January 2026 00:45:07 +0000 (0:00:00.171) 0:01:15.755 ******
2026-01-08 00:45:07.736837 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28', 'data_vg': 'ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28'})
2026-01-08 00:45:07.736851 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1538380d-5182-5482-9616-e6fa16e7f592', 'data_vg': 'ceph-1538380d-5182-5482-9616-e6fa16e7f592'})
2026-01-08 00:45:07.736862 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:07.736873 | orchestrator |
2026-01-08 00:45:07.736884 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-01-08 00:45:07.736896 | orchestrator | Thursday 08 January 2026 00:45:07 +0000 (0:00:00.150) 0:01:15.906 ******
2026-01-08 00:45:07.736908 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28', 'data_vg': 'ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28'})
2026-01-08 00:45:07.736919 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1538380d-5182-5482-9616-e6fa16e7f592', 'data_vg': 'ceph-1538380d-5182-5482-9616-e6fa16e7f592'})
2026-01-08 00:45:07.736930 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:07.736941 | orchestrator |
2026-01-08 00:45:07.736952 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-01-08 00:45:07.736963 | orchestrator | Thursday 08 January 2026 00:45:07 +0000 (0:00:00.168) 0:01:16.075 ******
2026-01-08 00:45:07.736974 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28', 'data_vg': 'ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28'})
2026-01-08 00:45:07.736985 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1538380d-5182-5482-9616-e6fa16e7f592', 'data_vg': 'ceph-1538380d-5182-5482-9616-e6fa16e7f592'})
2026-01-08 00:45:07.736996 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:45:07.737007 | orchestrator |
2026-01-08 00:45:07.737018 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-01-08 00:45:07.737029 | orchestrator | Thursday 08 January 2026 00:45:07 +0000 (0:00:00.143) 0:01:16.219 ******
2026-01-08 00:45:07.737040 |
orchestrator | ok: [testbed-node-5] => { 2026-01-08 00:45:07.737051 | orchestrator |  "lvm_report": { 2026-01-08 00:45:07.737062 | orchestrator |  "lv": [ 2026-01-08 00:45:07.737073 | orchestrator |  { 2026-01-08 00:45:07.737089 | orchestrator |  "lv_name": "osd-block-1538380d-5182-5482-9616-e6fa16e7f592", 2026-01-08 00:45:07.737101 | orchestrator |  "vg_name": "ceph-1538380d-5182-5482-9616-e6fa16e7f592" 2026-01-08 00:45:07.737112 | orchestrator |  }, 2026-01-08 00:45:07.737123 | orchestrator |  { 2026-01-08 00:45:07.737134 | orchestrator |  "lv_name": "osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28", 2026-01-08 00:45:07.737145 | orchestrator |  "vg_name": "ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28" 2026-01-08 00:45:07.737155 | orchestrator |  } 2026-01-08 00:45:07.737166 | orchestrator |  ], 2026-01-08 00:45:07.737177 | orchestrator |  "pv": [ 2026-01-08 00:45:07.737188 | orchestrator |  { 2026-01-08 00:45:07.737199 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-08 00:45:07.737210 | orchestrator |  "vg_name": "ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28" 2026-01-08 00:45:07.737221 | orchestrator |  }, 2026-01-08 00:45:07.737232 | orchestrator |  { 2026-01-08 00:45:07.737243 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-08 00:45:07.737254 | orchestrator |  "vg_name": "ceph-1538380d-5182-5482-9616-e6fa16e7f592" 2026-01-08 00:45:07.737265 | orchestrator |  } 2026-01-08 00:45:07.737276 | orchestrator |  ] 2026-01-08 00:45:07.737294 | orchestrator |  } 2026-01-08 00:45:07.737305 | orchestrator | } 2026-01-08 00:45:07.737316 | orchestrator | 2026-01-08 00:45:07.737328 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 00:45:07.737339 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-08 00:45:07.737350 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-08 00:45:07.737362 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-08 00:45:07.737373 | orchestrator | 2026-01-08 00:45:07.737384 | orchestrator | 2026-01-08 00:45:07.737395 | orchestrator | 2026-01-08 00:45:07.737405 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 00:45:07.737416 | orchestrator | Thursday 08 January 2026 00:45:07 +0000 (0:00:00.153) 0:01:16.372 ****** 2026-01-08 00:45:07.737427 | orchestrator | =============================================================================== 2026-01-08 00:45:07.737473 | orchestrator | Create block VGs -------------------------------------------------------- 6.78s 2026-01-08 00:45:07.737490 | orchestrator | Create block LVs -------------------------------------------------------- 4.05s 2026-01-08 00:45:07.737501 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.84s 2026-01-08 00:45:07.737512 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.82s 2026-01-08 00:45:07.737523 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.68s 2026-01-08 00:45:07.737534 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.63s 2026-01-08 00:45:07.737545 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.61s 2026-01-08 00:45:07.737556 | orchestrator | Add known partitions to the list of available block devices ------------- 1.53s 2026-01-08 00:45:07.737575 | orchestrator | Add known links to the list of available block devices ------------------ 1.39s 2026-01-08 00:45:08.147103 | orchestrator | Print LVM report data --------------------------------------------------- 0.96s 2026-01-08 00:45:08.147317 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s 2026-01-08 00:45:08.147342 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.84s 2026-01-08 00:45:08.147356 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.81s 2026-01-08 00:45:08.147370 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.80s 2026-01-08 00:45:08.147397 | orchestrator | Add known partitions to the list of available block devices ------------- 0.79s 2026-01-08 00:45:08.147411 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.78s 2026-01-08 00:45:08.147424 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.78s 2026-01-08 00:45:08.147463 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s 2026-01-08 00:45:08.147478 | orchestrator | Get initial list of available block devices ----------------------------- 0.74s 2026-01-08 00:45:08.147492 | orchestrator | Print size needed for LVs on ceph_db_devices ---------------------------- 0.73s 2026-01-08 00:45:20.628650 | orchestrator | 2026-01-08 00:45:20 | INFO  | Task 64c4de7a-f8cf-406f-8441-377492b69a0c (facts) was prepared for execution. 2026-01-08 00:45:20.628778 | orchestrator | 2026-01-08 00:45:20 | INFO  | It takes a moment until task 64c4de7a-f8cf-406f-8441-377492b69a0c (facts) has been started and output is visible here. 
2026-01-08 00:45:33.299329 | orchestrator | 2026-01-08 00:45:33.299446 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-08 00:45:33.299459 | orchestrator | 2026-01-08 00:45:33.299466 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-08 00:45:33.299472 | orchestrator | Thursday 08 January 2026 00:45:25 +0000 (0:00:00.295) 0:00:00.295 ****** 2026-01-08 00:45:33.299503 | orchestrator | ok: [testbed-manager] 2026-01-08 00:45:33.299511 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:45:33.299517 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:45:33.299523 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:45:33.299529 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:45:33.299535 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:45:33.299541 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:45:33.299546 | orchestrator | 2026-01-08 00:45:33.299566 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-08 00:45:33.299573 | orchestrator | Thursday 08 January 2026 00:45:26 +0000 (0:00:01.135) 0:00:01.430 ****** 2026-01-08 00:45:33.299581 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:45:33.299587 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:45:33.299593 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:45:33.299600 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:45:33.299606 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:45:33.299612 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:45:33.299618 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:45:33.299623 | orchestrator | 2026-01-08 00:45:33.299629 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-08 00:45:33.299635 | orchestrator | 2026-01-08 00:45:33.299642 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-01-08 00:45:33.299648 | orchestrator | Thursday 08 January 2026 00:45:27 +0000 (0:00:01.247) 0:00:02.677 ****** 2026-01-08 00:45:33.299654 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:45:33.299661 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:45:33.299668 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:45:33.299674 | orchestrator | ok: [testbed-manager] 2026-01-08 00:45:33.299681 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:45:33.299687 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:45:33.299693 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:45:33.299698 | orchestrator | 2026-01-08 00:45:33.299704 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-08 00:45:33.299710 | orchestrator | 2026-01-08 00:45:33.299717 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-08 00:45:33.299724 | orchestrator | Thursday 08 January 2026 00:45:32 +0000 (0:00:04.896) 0:00:07.574 ****** 2026-01-08 00:45:33.299730 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:45:33.299736 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:45:33.299741 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:45:33.299747 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:45:33.299753 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:45:33.299760 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:45:33.299767 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:45:33.299773 | orchestrator | 2026-01-08 00:45:33.299779 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 00:45:33.299785 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-08 00:45:33.299792 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-01-08 00:45:33.299797 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-08 00:45:33.299805 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-08 00:45:33.299811 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-08 00:45:33.299816 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-08 00:45:33.299830 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-08 00:45:33.299836 | orchestrator | 2026-01-08 00:45:33.299842 | orchestrator | 2026-01-08 00:45:33.299848 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 00:45:33.299854 | orchestrator | Thursday 08 January 2026 00:45:32 +0000 (0:00:00.554) 0:00:08.128 ****** 2026-01-08 00:45:33.299861 | orchestrator | =============================================================================== 2026-01-08 00:45:33.299867 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.90s 2026-01-08 00:45:33.299872 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.25s 2026-01-08 00:45:33.299879 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.14s 2026-01-08 00:45:33.299885 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-01-08 00:45:45.916470 | orchestrator | 2026-01-08 00:45:45 | INFO  | Task 93acb623-d389-4ee8-8bc1-0529b4390709 (frr) was prepared for execution. 2026-01-08 00:45:45.916555 | orchestrator | 2026-01-08 00:45:45 | INFO  | It takes a moment until task 93acb623-d389-4ee8-8bc1-0529b4390709 (frr) has been started and output is visible here. 
2026-01-08 00:46:12.747768 | orchestrator | 2026-01-08 00:46:12.747869 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-01-08 00:46:12.747884 | orchestrator | 2026-01-08 00:46:12.747891 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-01-08 00:46:12.747899 | orchestrator | Thursday 08 January 2026 00:45:50 +0000 (0:00:00.235) 0:00:00.236 ****** 2026-01-08 00:46:12.747905 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-01-08 00:46:12.747913 | orchestrator | 2026-01-08 00:46:12.747920 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-01-08 00:46:12.747926 | orchestrator | Thursday 08 January 2026 00:45:50 +0000 (0:00:00.218) 0:00:00.454 ****** 2026-01-08 00:46:12.747932 | orchestrator | changed: [testbed-manager] 2026-01-08 00:46:12.747940 | orchestrator | 2026-01-08 00:46:12.747946 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-01-08 00:46:12.747970 | orchestrator | Thursday 08 January 2026 00:45:51 +0000 (0:00:01.199) 0:00:01.654 ****** 2026-01-08 00:46:12.747977 | orchestrator | changed: [testbed-manager] 2026-01-08 00:46:12.747983 | orchestrator | 2026-01-08 00:46:12.747990 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-01-08 00:46:12.747996 | orchestrator | Thursday 08 January 2026 00:46:02 +0000 (0:00:10.775) 0:00:12.429 ****** 2026-01-08 00:46:12.748003 | orchestrator | ok: [testbed-manager] 2026-01-08 00:46:12.748011 | orchestrator | 2026-01-08 00:46:12.748017 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-01-08 00:46:12.748023 | orchestrator | Thursday 08 January 2026 00:46:03 +0000 (0:00:01.077) 0:00:13.506 ****** 2026-01-08 
00:46:12.748029 | orchestrator | changed: [testbed-manager] 2026-01-08 00:46:12.748035 | orchestrator | 2026-01-08 00:46:12.748042 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-01-08 00:46:12.748047 | orchestrator | Thursday 08 January 2026 00:46:04 +0000 (0:00:01.030) 0:00:14.537 ****** 2026-01-08 00:46:12.748054 | orchestrator | ok: [testbed-manager] 2026-01-08 00:46:12.748060 | orchestrator | 2026-01-08 00:46:12.748067 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-01-08 00:46:12.748074 | orchestrator | Thursday 08 January 2026 00:46:05 +0000 (0:00:01.185) 0:00:15.723 ****** 2026-01-08 00:46:12.748080 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:46:12.748086 | orchestrator | 2026-01-08 00:46:12.748092 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-01-08 00:46:12.748099 | orchestrator | Thursday 08 January 2026 00:46:05 +0000 (0:00:00.155) 0:00:15.879 ****** 2026-01-08 00:46:12.748127 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:46:12.748133 | orchestrator | 2026-01-08 00:46:12.748139 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-01-08 00:46:12.748146 | orchestrator | Thursday 08 January 2026 00:46:06 +0000 (0:00:00.177) 0:00:16.056 ****** 2026-01-08 00:46:12.748153 | orchestrator | changed: [testbed-manager] 2026-01-08 00:46:12.748158 | orchestrator | 2026-01-08 00:46:12.748164 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-01-08 00:46:12.748170 | orchestrator | Thursday 08 January 2026 00:46:07 +0000 (0:00:01.010) 0:00:17.066 ****** 2026-01-08 00:46:12.748176 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-01-08 00:46:12.748182 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-01-08 00:46:12.748191 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-01-08 00:46:12.748198 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-01-08 00:46:12.748206 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-01-08 00:46:12.748214 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-01-08 00:46:12.748221 | orchestrator | 2026-01-08 00:46:12.748229 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-01-08 00:46:12.748236 | orchestrator | Thursday 08 January 2026 00:46:09 +0000 (0:00:02.264) 0:00:19.330 ****** 2026-01-08 00:46:12.748243 | orchestrator | ok: [testbed-manager] 2026-01-08 00:46:12.748250 | orchestrator | 2026-01-08 00:46:12.748258 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-01-08 00:46:12.748265 | orchestrator | Thursday 08 January 2026 00:46:11 +0000 (0:00:01.662) 0:00:20.993 ****** 2026-01-08 00:46:12.748272 | orchestrator | changed: [testbed-manager] 2026-01-08 00:46:12.748279 | orchestrator | 2026-01-08 00:46:12.748287 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 00:46:12.748295 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-08 00:46:12.748302 | orchestrator | 2026-01-08 00:46:12.748310 | orchestrator | 2026-01-08 00:46:12.748317 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 00:46:12.748324 | orchestrator | Thursday 08 January 2026 00:46:12 +0000 (0:00:01.452) 0:00:22.446 ****** 2026-01-08 00:46:12.748333 | 
orchestrator | =============================================================================== 2026-01-08 00:46:12.748340 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.78s 2026-01-08 00:46:12.748346 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.26s 2026-01-08 00:46:12.748352 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.66s 2026-01-08 00:46:12.748359 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.45s 2026-01-08 00:46:12.748367 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.20s 2026-01-08 00:46:12.748415 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.19s 2026-01-08 00:46:12.748423 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.08s 2026-01-08 00:46:12.748430 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.03s 2026-01-08 00:46:12.748436 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.01s 2026-01-08 00:46:12.748443 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s 2026-01-08 00:46:12.748450 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.18s 2026-01-08 00:46:12.748458 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.16s 2026-01-08 00:46:13.061217 | orchestrator | 2026-01-08 00:46:13.063119 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Thu Jan 8 00:46:13 UTC 2026 2026-01-08 00:46:13.063167 | orchestrator | 2026-01-08 00:46:15.012676 | orchestrator | 2026-01-08 00:46:15 | INFO  | Collection nutshell is prepared for execution 2026-01-08 00:46:15.012774 | orchestrator | 2026-01-08 00:46:15 | INFO  | A [0] - 
dotfiles 2026-01-08 00:46:25.063475 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [0] - homer 2026-01-08 00:46:25.063547 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [0] - netdata 2026-01-08 00:46:25.063553 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [0] - openstackclient 2026-01-08 00:46:25.063559 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [0] - phpmyadmin 2026-01-08 00:46:25.063567 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [0] - common 2026-01-08 00:46:25.067809 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [1] -- loadbalancer 2026-01-08 00:46:25.067891 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [2] --- opensearch 2026-01-08 00:46:25.068255 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [2] --- mariadb-ng 2026-01-08 00:46:25.068676 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [3] ---- horizon 2026-01-08 00:46:25.069063 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [3] ---- keystone 2026-01-08 00:46:25.069247 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [4] ----- neutron 2026-01-08 00:46:25.069487 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [5] ------ wait-for-nova 2026-01-08 00:46:25.069849 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [6] ------- octavia 2026-01-08 00:46:25.071808 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [4] ----- barbican 2026-01-08 00:46:25.071950 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [4] ----- designate 2026-01-08 00:46:25.071973 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [4] ----- ironic 2026-01-08 00:46:25.072465 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [4] ----- placement 2026-01-08 00:46:25.072495 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [4] ----- magnum 2026-01-08 00:46:25.073117 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [1] -- openvswitch 2026-01-08 00:46:25.073302 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [2] --- ovn 2026-01-08 00:46:25.073820 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [1] -- memcached 2026-01-08 
00:46:25.073981 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [1] -- redis 2026-01-08 00:46:25.074365 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [1] -- rabbitmq-ng 2026-01-08 00:46:25.075036 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [0] - kubernetes 2026-01-08 00:46:25.077517 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [1] -- kubeconfig 2026-01-08 00:46:25.077542 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [1] -- copy-kubeconfig 2026-01-08 00:46:25.077776 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [0] - ceph 2026-01-08 00:46:25.079962 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [1] -- ceph-pools 2026-01-08 00:46:25.080087 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [2] --- copy-ceph-keys 2026-01-08 00:46:25.080097 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [3] ---- cephclient 2026-01-08 00:46:25.080338 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-01-08 00:46:25.080349 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [4] ----- wait-for-keystone 2026-01-08 00:46:25.080747 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [5] ------ kolla-ceph-rgw 2026-01-08 00:46:25.080845 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [5] ------ glance 2026-01-08 00:46:25.080873 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [5] ------ cinder 2026-01-08 00:46:25.081201 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [5] ------ nova 2026-01-08 00:46:25.081328 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [4] ----- prometheus 2026-01-08 00:46:25.081337 | orchestrator | 2026-01-08 00:46:25 | INFO  | A [5] ------ grafana 2026-01-08 00:46:25.308072 | orchestrator | 2026-01-08 00:46:25 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-01-08 00:46:25.308162 | orchestrator | 2026-01-08 00:46:25 | INFO  | Tasks are running in the background 2026-01-08 00:46:28.452446 | orchestrator | 2026-01-08 00:46:28 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-01-08 00:46:30.585365 | orchestrator | 2026-01-08 00:46:30 | INFO  | Task e6847531-0ac1-4d59-a5d7-e47b4dc48767 is in state STARTED 2026-01-08 00:46:30.585490 | orchestrator | 2026-01-08 00:46:30 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:46:30.586742 | orchestrator | 2026-01-08 00:46:30 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:46:30.588553 | orchestrator | 2026-01-08 00:46:30 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:46:30.589107 | orchestrator | 2026-01-08 00:46:30 | INFO  | Task 60bdd642-23d9-44c6-8d94-7c64fe93ebad is in state STARTED 2026-01-08 00:46:30.589538 | orchestrator | 2026-01-08 00:46:30 | INFO  | Task 4252b79b-4ea0-4700-a4ec-2aa59a5c83d4 is in state STARTED 2026-01-08 00:46:30.590179 | orchestrator | 2026-01-08 00:46:30 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:46:30.590222 | orchestrator | 2026-01-08 00:46:30 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:46:33.657975 | orchestrator | 2026-01-08 00:46:33 | INFO  | Task e6847531-0ac1-4d59-a5d7-e47b4dc48767 is in state STARTED 2026-01-08 00:46:33.658114 | orchestrator | 2026-01-08 00:46:33 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:46:33.658128 | orchestrator | 2026-01-08 00:46:33 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:46:33.660670 | orchestrator | 2026-01-08 00:46:33 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:46:33.660741 | orchestrator | 2026-01-08 00:46:33 | INFO  | Task 60bdd642-23d9-44c6-8d94-7c64fe93ebad is in state STARTED 2026-01-08 00:46:33.660772 | orchestrator | 2026-01-08 00:46:33 | INFO  | Task 4252b79b-4ea0-4700-a4ec-2aa59a5c83d4 is in state STARTED 2026-01-08 00:46:33.660780 | orchestrator | 2026-01-08 00:46:33 | INFO  | Task 
0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:46:33.660789 | orchestrator | 2026-01-08 00:46:33 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:46:36.688554 | orchestrator | 2026-01-08 00:46:36 | INFO  | Task e6847531-0ac1-4d59-a5d7-e47b4dc48767 is in state STARTED 2026-01-08 00:46:36.688642 | orchestrator | 2026-01-08 00:46:36 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:46:36.690309 | orchestrator | 2026-01-08 00:46:36 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:46:36.693981 | orchestrator | 2026-01-08 00:46:36 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:46:36.694305 | orchestrator | 2026-01-08 00:46:36 | INFO  | Task 60bdd642-23d9-44c6-8d94-7c64fe93ebad is in state STARTED 2026-01-08 00:46:36.694959 | orchestrator | 2026-01-08 00:46:36 | INFO  | Task 4252b79b-4ea0-4700-a4ec-2aa59a5c83d4 is in state STARTED 2026-01-08 00:46:36.695725 | orchestrator | 2026-01-08 00:46:36 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:46:36.695811 | orchestrator | 2026-01-08 00:46:36 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:46:39.947038 | orchestrator | 2026-01-08 00:46:39 | INFO  | Task e6847531-0ac1-4d59-a5d7-e47b4dc48767 is in state STARTED 2026-01-08 00:46:39.947274 | orchestrator | 2026-01-08 00:46:39 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:46:39.947295 | orchestrator | 2026-01-08 00:46:39 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:46:39.947308 | orchestrator | 2026-01-08 00:46:39 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:46:39.947334 | orchestrator | 2026-01-08 00:46:39 | INFO  | Task 60bdd642-23d9-44c6-8d94-7c64fe93ebad is in state STARTED 2026-01-08 00:46:39.949021 | orchestrator | 2026-01-08 00:46:39 | INFO  | Task 
4252b79b-4ea0-4700-a4ec-2aa59a5c83d4 is in state STARTED 2026-01-08 00:46:39.949285 | orchestrator | 2026-01-08 00:46:39 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:46:39.949307 | orchestrator | 2026-01-08 00:46:39 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:46:43.082644 | orchestrator | 2026-01-08 00:46:43 | INFO  | Task e6847531-0ac1-4d59-a5d7-e47b4dc48767 is in state STARTED 2026-01-08 00:46:43.082748 | orchestrator | 2026-01-08 00:46:43 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:46:43.083087 | orchestrator | 2026-01-08 00:46:43 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:46:43.083615 | orchestrator | 2026-01-08 00:46:43 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:46:43.084596 | orchestrator | 2026-01-08 00:46:43 | INFO  | Task 60bdd642-23d9-44c6-8d94-7c64fe93ebad is in state STARTED 2026-01-08 00:46:43.085572 | orchestrator | 2026-01-08 00:46:43 | INFO  | Task 4252b79b-4ea0-4700-a4ec-2aa59a5c83d4 is in state STARTED 2026-01-08 00:46:43.086515 | orchestrator | 2026-01-08 00:46:43 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:46:43.086555 | orchestrator | 2026-01-08 00:46:43 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:46:46.185837 | orchestrator | 2026-01-08 00:46:46 | INFO  | Task e6847531-0ac1-4d59-a5d7-e47b4dc48767 is in state STARTED 2026-01-08 00:46:46.185950 | orchestrator | 2026-01-08 00:46:46 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:46:46.185964 | orchestrator | 2026-01-08 00:46:46 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:46:46.241510 | orchestrator | 2026-01-08 00:46:46 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:46:46.241578 | orchestrator | 2026-01-08 00:46:46 | INFO  | Task 
60bdd642-23d9-44c6-8d94-7c64fe93ebad is in state STARTED 2026-01-08 00:46:46.241585 | orchestrator | 2026-01-08 00:46:46 | INFO  | Task 4252b79b-4ea0-4700-a4ec-2aa59a5c83d4 is in state STARTED 2026-01-08 00:46:46.241589 | orchestrator | 2026-01-08 00:46:46 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:46:46.241595 | orchestrator | 2026-01-08 00:46:46 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:46:49.278314 | orchestrator | 2026-01-08 00:46:49 | INFO  | Task e6847531-0ac1-4d59-a5d7-e47b4dc48767 is in state STARTED 2026-01-08 00:46:49.278902 | orchestrator | 2026-01-08 00:46:49 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:46:49.279363 | orchestrator | 2026-01-08 00:46:49 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:46:49.281328 | orchestrator | 2026-01-08 00:46:49 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:46:49.282823 | orchestrator | 2026-01-08 00:46:49 | INFO  | Task 60bdd642-23d9-44c6-8d94-7c64fe93ebad is in state STARTED 2026-01-08 00:46:49.286849 | orchestrator | 2026-01-08 00:46:49 | INFO  | Task 4252b79b-4ea0-4700-a4ec-2aa59a5c83d4 is in state STARTED 2026-01-08 00:46:49.287735 | orchestrator | 2026-01-08 00:46:49 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:46:49.287758 | orchestrator | 2026-01-08 00:46:49 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:46:52.372233 | orchestrator | 2026-01-08 00:46:52.372279 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-01-08 00:46:52.372295 | orchestrator | 2026-01-08 00:46:52.372300 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2026-01-08 00:46:52.372303 | orchestrator | Thursday 08 January 2026 00:46:38 +0000 (0:00:00.281) 0:00:00.281 ****** 2026-01-08 00:46:52.372307 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:46:52.372311 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:46:52.372314 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:46:52.372317 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:46:52.372320 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:46:52.372323 | orchestrator | changed: [testbed-manager] 2026-01-08 00:46:52.372326 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:46:52.372329 | orchestrator | 2026-01-08 00:46:52.372333 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2026-01-08 00:46:52.372336 | orchestrator | Thursday 08 January 2026 00:46:41 +0000 (0:00:02.966) 0:00:03.247 ****** 2026-01-08 00:46:52.372339 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-01-08 00:46:52.372343 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-01-08 00:46:52.372346 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-01-08 00:46:52.372349 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-01-08 00:46:52.372352 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-01-08 00:46:52.372355 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-01-08 00:46:52.372358 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-01-08 00:46:52.372361 | orchestrator | 2026-01-08 00:46:52.372365 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2026-01-08 00:46:52.372440 | orchestrator | Thursday 08 January 2026 00:46:42 +0000 (0:00:01.688) 0:00:04.936 ****** 2026-01-08 00:46:52.372486 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-08 00:46:41.907010', 'end': '2026-01-08 00:46:41.916517', 'delta': '0:00:00.009507', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-08 00:46:52.372498 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-08 00:46:42.060237', 'end': '2026-01-08 00:46:42.067577', 'delta': '0:00:00.007340', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-08 00:46:52.372512 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-08 00:46:42.191268', 'end': '2026-01-08 00:46:42.198345', 'delta': '0:00:00.007077', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-08 00:46:52.372525 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-08 00:46:42.044666', 'end': '2026-01-08 00:46:42.051466', 'delta': '0:00:00.006800', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-08 00:46:52.372528 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-08 00:46:42.244528', 'end': '2026-01-08 00:46:42.250408', 'delta': '0:00:00.005880', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-08 00:46:52.372532 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-08 00:46:42.455125', 'end': '2026-01-08 00:46:42.461587', 'delta': '0:00:00.006462', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-08 00:46:52.372537 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-01-08 00:46:42.793501', 'end': '2026-01-08 00:46:42.800659', 'delta': '0:00:00.007158', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-01-08 00:46:52.372545 | orchestrator | 2026-01-08 00:46:52.372548 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2026-01-08 00:46:52.372552 | orchestrator | Thursday 08 January 2026 00:46:44 +0000 (0:00:01.933) 0:00:06.869 ****** 2026-01-08 00:46:52.372555 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-01-08 00:46:52.372558 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-01-08 00:46:52.372561 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-01-08 00:46:52.372564 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-01-08 00:46:52.372567 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-01-08 00:46:52.372570 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-01-08 00:46:52.372573 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-01-08 00:46:52.372576 | orchestrator | 2026-01-08 00:46:52.372579 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2026-01-08 00:46:52.372583 | orchestrator | Thursday 08 January 2026 00:46:47 +0000 (0:00:02.658) 0:00:09.528 ****** 2026-01-08 00:46:52.372586 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-01-08 00:46:52.372589 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-01-08 00:46:52.372592 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-01-08 00:46:52.372595 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-01-08 00:46:52.372598 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-01-08 00:46:52.372601 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-01-08 00:46:52.372604 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-01-08 00:46:52.372607 | orchestrator | 2026-01-08 00:46:52.372610 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 00:46:52.372617 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-08 00:46:52.372621 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-08 00:46:52.372624 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-08 00:46:52.372627 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-08 00:46:52.372630 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-08 00:46:52.372633 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-08 00:46:52.372636 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-08 00:46:52.372639 | orchestrator | 2026-01-08 00:46:52.372642 | orchestrator | 2026-01-08 00:46:52.372645 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-01-08 00:46:52.372649 | orchestrator | Thursday 08 January 2026 00:46:50 +0000 (0:00:02.498) 0:00:12.026 ****** 2026-01-08 00:46:52.372652 | orchestrator | =============================================================================== 2026-01-08 00:46:52.372655 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 2.97s 2026-01-08 00:46:52.372660 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.66s 2026-01-08 00:46:52.372663 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.50s 2026-01-08 00:46:52.372666 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.93s 2026-01-08 00:46:52.372670 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.69s 2026-01-08 00:46:52.372673 | orchestrator | 2026-01-08 00:46:52 | INFO  | Task e6847531-0ac1-4d59-a5d7-e47b4dc48767 is in state SUCCESS 2026-01-08 00:46:52.372676 | orchestrator | 2026-01-08 00:46:52 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:46:52.372679 | orchestrator | 2026-01-08 00:46:52 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:46:52.372682 | orchestrator | 2026-01-08 00:46:52 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:46:52.372685 | orchestrator | 2026-01-08 00:46:52 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:46:52.372690 | orchestrator | 2026-01-08 00:46:52 | INFO  | Task 60bdd642-23d9-44c6-8d94-7c64fe93ebad is in state STARTED 2026-01-08 00:46:52.372693 | orchestrator | 2026-01-08 00:46:52 | INFO  | Task 4252b79b-4ea0-4700-a4ec-2aa59a5c83d4 is in state STARTED 2026-01-08 00:46:52.372696 | orchestrator | 2026-01-08 00:46:52 | INFO  | Task 
0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:46:52.372700 | orchestrator | 2026-01-08 00:46:52 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:46:55.487029 | orchestrator | 2026-01-08 00:46:55 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:46:55.487094 | orchestrator | 2026-01-08 00:46:55 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:46:55.487101 | orchestrator | 2026-01-08 00:46:55 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:46:55.488353 | orchestrator | 2026-01-08 00:46:55 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:46:55.489348 | orchestrator | 2026-01-08 00:46:55 | INFO  | Task 60bdd642-23d9-44c6-8d94-7c64fe93ebad is in state STARTED 2026-01-08 00:46:55.490859 | orchestrator | 2026-01-08 00:46:55 | INFO  | Task 4252b79b-4ea0-4700-a4ec-2aa59a5c83d4 is in state STARTED 2026-01-08 00:46:55.492842 | orchestrator | 2026-01-08 00:46:55 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:46:55.492903 | orchestrator | 2026-01-08 00:46:55 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:46:58.560352 | orchestrator | 2026-01-08 00:46:58 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:46:58.560506 | orchestrator | 2026-01-08 00:46:58 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:46:58.560516 | orchestrator | 2026-01-08 00:46:58 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:46:58.560522 | orchestrator | 2026-01-08 00:46:58 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:46:58.560528 | orchestrator | 2026-01-08 00:46:58 | INFO  | Task 60bdd642-23d9-44c6-8d94-7c64fe93ebad is in state STARTED 2026-01-08 00:46:58.560534 | orchestrator | 2026-01-08 00:46:58 | INFO  | Task 
4252b79b-4ea0-4700-a4ec-2aa59a5c83d4 is in state STARTED 2026-01-08 00:46:58.562365 | orchestrator | 2026-01-08 00:46:58 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:46:58.562455 | orchestrator | 2026-01-08 00:46:58 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:47:01.605685 | orchestrator | 2026-01-08 00:47:01 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:47:01.605746 | orchestrator | 2026-01-08 00:47:01 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:47:01.605754 | orchestrator | 2026-01-08 00:47:01 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:47:01.606624 | orchestrator | 2026-01-08 00:47:01 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:47:01.608572 | orchestrator | 2026-01-08 00:47:01 | INFO  | Task 60bdd642-23d9-44c6-8d94-7c64fe93ebad is in state STARTED 2026-01-08 00:47:01.609523 | orchestrator | 2026-01-08 00:47:01 | INFO  | Task 4252b79b-4ea0-4700-a4ec-2aa59a5c83d4 is in state STARTED 2026-01-08 00:47:01.616574 | orchestrator | 2026-01-08 00:47:01 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:47:01.616630 | orchestrator | 2026-01-08 00:47:01 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:47:04.653200 | orchestrator | 2026-01-08 00:47:04 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:47:04.657186 | orchestrator | 2026-01-08 00:47:04 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:47:04.658505 | orchestrator | 2026-01-08 00:47:04 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:47:04.662259 | orchestrator | 2026-01-08 00:47:04 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:47:04.663251 | orchestrator | 2026-01-08 00:47:04 | INFO  | Task 
60bdd642-23d9-44c6-8d94-7c64fe93ebad is in state STARTED 2026-01-08 00:47:04.664627 | orchestrator | 2026-01-08 00:47:04 | INFO  | Task 4252b79b-4ea0-4700-a4ec-2aa59a5c83d4 is in state STARTED 2026-01-08 00:47:04.668547 | orchestrator | 2026-01-08 00:47:04 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:47:04.668594 | orchestrator | 2026-01-08 00:47:04 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:47:07.800057 | orchestrator | 2026-01-08 00:47:07 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:47:07.800118 | orchestrator | 2026-01-08 00:47:07 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:47:07.800125 | orchestrator | 2026-01-08 00:47:07 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:47:07.800131 | orchestrator | 2026-01-08 00:47:07 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:47:07.800136 | orchestrator | 2026-01-08 00:47:07 | INFO  | Task 60bdd642-23d9-44c6-8d94-7c64fe93ebad is in state STARTED 2026-01-08 00:47:07.800141 | orchestrator | 2026-01-08 00:47:07 | INFO  | Task 4252b79b-4ea0-4700-a4ec-2aa59a5c83d4 is in state STARTED 2026-01-08 00:47:07.800147 | orchestrator | 2026-01-08 00:47:07 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:47:07.800152 | orchestrator | 2026-01-08 00:47:07 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:47:10.773615 | orchestrator | 2026-01-08 00:47:10 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:47:10.773667 | orchestrator | 2026-01-08 00:47:10 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:47:10.773674 | orchestrator | 2026-01-08 00:47:10 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:47:10.773679 | orchestrator | 2026-01-08 00:47:10 | INFO  | Task 
66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:47:10.773795 | orchestrator | 2026-01-08 00:47:10 | INFO  | Task 60bdd642-23d9-44c6-8d94-7c64fe93ebad is in state STARTED 2026-01-08 00:47:10.774735 | orchestrator | 2026-01-08 00:47:10 | INFO  | Task 4252b79b-4ea0-4700-a4ec-2aa59a5c83d4 is in state STARTED 2026-01-08 00:47:10.774997 | orchestrator | 2026-01-08 00:47:10 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:47:10.775009 | orchestrator | 2026-01-08 00:47:10 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:47:14.008710 | orchestrator | 2026-01-08 00:47:13 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:47:14.008767 | orchestrator | 2026-01-08 00:47:13 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:47:14.008774 | orchestrator | 2026-01-08 00:47:14 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:47:14.008779 | orchestrator | 2026-01-08 00:47:14 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:47:14.008785 | orchestrator | 2026-01-08 00:47:14 | INFO  | Task 60bdd642-23d9-44c6-8d94-7c64fe93ebad is in state STARTED 2026-01-08 00:47:14.008790 | orchestrator | 2026-01-08 00:47:14 | INFO  | Task 4252b79b-4ea0-4700-a4ec-2aa59a5c83d4 is in state STARTED 2026-01-08 00:47:14.008811 | orchestrator | 2026-01-08 00:47:14 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:47:14.008817 | orchestrator | 2026-01-08 00:47:14 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:47:17.099772 | orchestrator | 2026-01-08 00:47:17 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:47:17.108447 | orchestrator | 2026-01-08 00:47:17 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:47:17.114179 | orchestrator | 2026-01-08 00:47:17 | INFO  | Task 
81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:47:17.123809 | orchestrator | 2026-01-08 00:47:17 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:47:17.126606 | orchestrator | 2026-01-08 00:47:17 | INFO  | Task 60bdd642-23d9-44c6-8d94-7c64fe93ebad is in state STARTED 2026-01-08 00:47:17.135054 | orchestrator | 2026-01-08 00:47:17 | INFO  | Task 4252b79b-4ea0-4700-a4ec-2aa59a5c83d4 is in state SUCCESS 2026-01-08 00:47:17.140563 | orchestrator | 2026-01-08 00:47:17 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:47:17.140661 | orchestrator | 2026-01-08 00:47:17 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:47:20.222978 | orchestrator | 2026-01-08 00:47:20 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:47:20.223451 | orchestrator | 2026-01-08 00:47:20 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:47:20.224147 | orchestrator | 2026-01-08 00:47:20 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:47:20.224630 | orchestrator | 2026-01-08 00:47:20 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:47:20.226689 | orchestrator | 2026-01-08 00:47:20 | INFO  | Task 60bdd642-23d9-44c6-8d94-7c64fe93ebad is in state SUCCESS 2026-01-08 00:47:20.228798 | orchestrator | 2026-01-08 00:47:20 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:47:20.228829 | orchestrator | 2026-01-08 00:47:20 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:47:23.293634 | orchestrator | 2026-01-08 00:47:23 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:47:23.293790 | orchestrator | 2026-01-08 00:47:23 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:47:23.295625 | orchestrator | 2026-01-08 00:47:23 | INFO  | Task 
81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:47:23.297868 | orchestrator | 2026-01-08 00:47:23 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:47:23.298646 | orchestrator | 2026-01-08 00:47:23 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:47:23.298676 | orchestrator | 2026-01-08 00:47:23 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:47:26.340010 | orchestrator | 2026-01-08 00:47:26 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:47:26.343264 | orchestrator | 2026-01-08 00:47:26 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:47:26.346494 | orchestrator | 2026-01-08 00:47:26 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:47:26.349565 | orchestrator | 2026-01-08 00:47:26 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:47:26.350535 | orchestrator | 2026-01-08 00:47:26 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:47:26.350568 | orchestrator | 2026-01-08 00:47:26 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:47:29.429516 | orchestrator | 2026-01-08 00:47:29 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:47:29.431934 | orchestrator | 2026-01-08 00:47:29 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:47:29.432251 | orchestrator | 2026-01-08 00:47:29 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:47:29.434327 | orchestrator | 2026-01-08 00:47:29 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:47:29.436765 | orchestrator | 2026-01-08 00:47:29 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:47:29.436822 | orchestrator | 2026-01-08 00:47:29 | INFO  | Wait 1 
second(s) until the next check 2026-01-08 00:47:32.505614 | orchestrator | 2026-01-08 00:47:32 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:47:32.506703 | orchestrator | 2026-01-08 00:47:32 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:47:32.508279 | orchestrator | 2026-01-08 00:47:32 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:47:32.510904 | orchestrator | 2026-01-08 00:47:32 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:47:32.511735 | orchestrator | 2026-01-08 00:47:32 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:47:32.511774 | orchestrator | 2026-01-08 00:47:32 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:47:35.560948 | orchestrator | 2026-01-08 00:47:35 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:47:35.565179 | orchestrator | 2026-01-08 00:47:35 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:47:35.569124 | orchestrator | 2026-01-08 00:47:35 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:47:35.570777 | orchestrator | 2026-01-08 00:47:35 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:47:35.579763 | orchestrator | 2026-01-08 00:47:35 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:47:35.579827 | orchestrator | 2026-01-08 00:47:35 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:47:38.720176 | orchestrator | 2026-01-08 00:47:38 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:47:38.723182 | orchestrator | 2026-01-08 00:47:38 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:47:38.725009 | orchestrator | 2026-01-08 00:47:38 | INFO  | Task 
81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:47:38.733278 | orchestrator | 2026-01-08 00:47:38 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:47:38.733738 | orchestrator | 2026-01-08 00:47:38 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:47:38.733776 | orchestrator | 2026-01-08 00:47:38 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:47:41.904657 | orchestrator | 2026-01-08 00:47:41 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:47:41.904721 | orchestrator | 2026-01-08 00:47:41 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:47:41.904731 | orchestrator | 2026-01-08 00:47:41 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:47:41.904738 | orchestrator | 2026-01-08 00:47:41 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:47:41.904745 | orchestrator | 2026-01-08 00:47:41 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:47:41.904749 | orchestrator | 2026-01-08 00:47:41 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:47:45.002616 | orchestrator | 2026-01-08 00:47:45 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:47:45.010298 | orchestrator | 2026-01-08 00:47:45 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:47:45.013457 | orchestrator | 2026-01-08 00:47:45 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:47:45.015795 | orchestrator | 2026-01-08 00:47:45 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:47:45.020057 | orchestrator | 2026-01-08 00:47:45 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:47:45.020303 | orchestrator | 2026-01-08 00:47:45 | INFO  | Wait 1 
second(s) until the next check 2026-01-08 00:47:48.126156 | orchestrator | 2026-01-08 00:47:48 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:47:48.126917 | orchestrator | 2026-01-08 00:47:48 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:47:48.130221 | orchestrator | 2026-01-08 00:47:48 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:47:48.130993 | orchestrator | 2026-01-08 00:47:48 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:47:48.132332 | orchestrator | 2026-01-08 00:47:48 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:47:48.132429 | orchestrator | 2026-01-08 00:47:48 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:47:51.171266 | orchestrator | 2026-01-08 00:47:51 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:47:51.171828 | orchestrator | 2026-01-08 00:47:51 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:47:51.173806 | orchestrator | 2026-01-08 00:47:51 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:47:51.174874 | orchestrator | 2026-01-08 00:47:51 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:47:51.175687 | orchestrator | 2026-01-08 00:47:51 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:47:51.176478 | orchestrator | 2026-01-08 00:47:51 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:47:54.212763 | orchestrator | 2026-01-08 00:47:54 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:47:54.215105 | orchestrator | 2026-01-08 00:47:54 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:47:54.216427 | orchestrator | 2026-01-08 00:47:54 | INFO  | Task 
81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:47:54.217516 | orchestrator | 2026-01-08 00:47:54 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:47:54.218592 | orchestrator | 2026-01-08 00:47:54 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:47:54.218630 | orchestrator | 2026-01-08 00:47:54 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:47:57.283079 | orchestrator | 2026-01-08 00:47:57 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:47:57.283811 | orchestrator | 2026-01-08 00:47:57 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:47:57.286181 | orchestrator | 2026-01-08 00:47:57 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:47:57.286308 | orchestrator | 2026-01-08 00:47:57 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:47:57.287725 | orchestrator | 2026-01-08 00:47:57 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:47:57.287824 | orchestrator | 2026-01-08 00:47:57 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:48:00.335267 | orchestrator | 2026-01-08 00:48:00 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED 2026-01-08 00:48:00.335418 | orchestrator | 2026-01-08 00:48:00 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:48:00.336664 | orchestrator | 2026-01-08 00:48:00 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:48:00.338854 | orchestrator | 2026-01-08 00:48:00 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:48:00.339514 | orchestrator | 2026-01-08 00:48:00 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:48:00.339578 | orchestrator | 2026-01-08 00:48:00 | INFO  | Wait 1 
second(s) until the next check
2026-01-08 00:48:03.383004 | orchestrator | 2026-01-08 00:48:03 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state STARTED
2026-01-08 00:48:03.383121 | orchestrator | 2026-01-08 00:48:03 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED
2026-01-08 00:48:03.383142 | orchestrator | 2026-01-08 00:48:03 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED
2026-01-08 00:48:03.386434 | orchestrator | 2026-01-08 00:48:03 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED
2026-01-08 00:48:03.388020 | orchestrator | 2026-01-08 00:48:03 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:48:03.388086 | orchestrator | 2026-01-08 00:48:03 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:48:06.442751 | orchestrator | 2026-01-08 00:48:06 | INFO  | Task bac4b674-804b-4e61-880e-c1fd9638ff3d is in state SUCCESS
2026-01-08 00:48:06.444469 | orchestrator |
2026-01-08 00:48:06.444535 | orchestrator |
2026-01-08 00:48:06.444595 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-01-08 00:48:06.444618 | orchestrator |
2026-01-08 00:48:06.444638 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-01-08 00:48:06.444657 | orchestrator | Thursday 08 January 2026 00:46:37 +0000 (0:00:00.317) 0:00:00.317 ******
2026-01-08 00:48:06.444678 | orchestrator | ok: [testbed-manager] => {
2026-01-08 00:48:06.444701 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-01-08 00:48:06.444721 | orchestrator | }
2026-01-08 00:48:06.444732 | orchestrator |
2026-01-08 00:48:06.444754 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-01-08 00:48:06.444773 | orchestrator | Thursday 08 January 2026 00:46:38 +0000 (0:00:00.410) 0:00:00.727 ******
2026-01-08 00:48:06.444798 | orchestrator | ok: [testbed-manager]
2026-01-08 00:48:06.444819 | orchestrator |
2026-01-08 00:48:06.444837 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-01-08 00:48:06.444854 | orchestrator | Thursday 08 January 2026 00:46:40 +0000 (0:00:02.287) 0:00:03.014 ******
2026-01-08 00:48:06.444872 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-01-08 00:48:06.444891 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-01-08 00:48:06.444909 | orchestrator |
2026-01-08 00:48:06.444925 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-01-08 00:48:06.444937 | orchestrator | Thursday 08 January 2026 00:46:41 +0000 (0:00:01.101) 0:00:04.116 ******
2026-01-08 00:48:06.444948 | orchestrator | changed: [testbed-manager]
2026-01-08 00:48:06.444962 | orchestrator |
2026-01-08 00:48:06.444976 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-01-08 00:48:06.444989 | orchestrator | Thursday 08 January 2026 00:46:43 +0000 (0:00:02.381) 0:00:06.497 ******
2026-01-08 00:48:06.445002 | orchestrator | changed: [testbed-manager]
2026-01-08 00:48:06.445015 | orchestrator |
2026-01-08 00:48:06.445027 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-01-08 00:48:06.445041 | orchestrator | Thursday 08 January 2026 00:46:45 +0000 (0:00:01.725) 0:00:08.222 ******
2026-01-08 00:48:06.445053 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-01-08 00:48:06.445066 | orchestrator | ok: [testbed-manager]
2026-01-08 00:48:06.445080 | orchestrator |
2026-01-08 00:48:06.445093 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-01-08 00:48:06.445106 | orchestrator | Thursday 08 January 2026 00:47:12 +0000 (0:00:26.473) 0:00:34.696 ******
2026-01-08 00:48:06.445119 | orchestrator | changed: [testbed-manager]
2026-01-08 00:48:06.445132 | orchestrator |
2026-01-08 00:48:06.445145 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 00:48:06.445158 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:48:06.445173 | orchestrator |
2026-01-08 00:48:06.445188 | orchestrator |
2026-01-08 00:48:06.445200 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 00:48:06.445213 | orchestrator | Thursday 08 January 2026 00:47:14 +0000 (0:00:02.058) 0:00:36.754 ******
2026-01-08 00:48:06.445227 | orchestrator | ===============================================================================
2026-01-08 00:48:06.445240 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.47s
2026-01-08 00:48:06.445252 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.38s
2026-01-08 00:48:06.445265 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.29s
2026-01-08 00:48:06.445278 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.06s
2026-01-08 00:48:06.445291 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.72s
2026-01-08 00:48:06.445455 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.10s
2026-01-08 00:48:06.445481 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.41s
2026-01-08 00:48:06.445493 | orchestrator |
2026-01-08 00:48:06.445510 | orchestrator |
2026-01-08 00:48:06.445536 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-01-08 00:48:06.445561 | orchestrator |
2026-01-08 00:48:06.445579 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-01-08 00:48:06.445600 | orchestrator | Thursday 08 January 2026 00:46:36 +0000 (0:00:00.414) 0:00:00.414 ******
2026-01-08 00:48:06.445619 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-01-08 00:48:06.445641 | orchestrator |
2026-01-08 00:48:06.445652 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-01-08 00:48:06.445663 | orchestrator | Thursday 08 January 2026 00:46:37 +0000 (0:00:00.552) 0:00:00.966 ******
2026-01-08 00:48:06.445674 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-01-08 00:48:06.445685 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-01-08 00:48:06.445696 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-01-08 00:48:06.445707 | orchestrator |
2026-01-08 00:48:06.445718 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-01-08 00:48:06.445729 | orchestrator | Thursday 08 January 2026 00:46:39 +0000 (0:00:01.812) 0:00:02.778 ******
2026-01-08 00:48:06.445740 | orchestrator | changed: [testbed-manager]
2026-01-08 00:48:06.445759 | orchestrator |
2026-01-08 00:48:06.445786 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-01-08 00:48:06.445806 | orchestrator | Thursday 08 January 2026 00:46:41 +0000 (0:00:02.273) 0:00:05.051 ******
2026-01-08 00:48:06.445845 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-01-08 00:48:06.445864 | orchestrator | ok: [testbed-manager]
2026-01-08 00:48:06.445882 | orchestrator |
2026-01-08 00:48:06.445894 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-01-08 00:48:06.445905 | orchestrator | Thursday 08 January 2026 00:47:12 +0000 (0:00:31.508) 0:00:36.560 ******
2026-01-08 00:48:06.445916 | orchestrator | changed: [testbed-manager]
2026-01-08 00:48:06.445927 | orchestrator |
2026-01-08 00:48:06.445938 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-01-08 00:48:06.445949 | orchestrator | Thursday 08 January 2026 00:47:14 +0000 (0:00:01.286) 0:00:37.846 ******
2026-01-08 00:48:06.445959 | orchestrator | ok: [testbed-manager]
2026-01-08 00:48:06.445977 | orchestrator |
2026-01-08 00:48:06.445989 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-01-08 00:48:06.446000 | orchestrator | Thursday 08 January 2026 00:47:15 +0000 (0:00:00.860) 0:00:38.707 ******
2026-01-08 00:48:06.446011 | orchestrator | changed: [testbed-manager]
2026-01-08 00:48:06.446116 | orchestrator |
2026-01-08 00:48:06.446144 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-01-08 00:48:06.446163 | orchestrator | Thursday 08 January 2026 00:47:17 +0000 (0:00:02.294) 0:00:41.002 ******
2026-01-08 00:48:06.446180 | orchestrator | changed: [testbed-manager]
2026-01-08 00:48:06.446199 | orchestrator |
2026-01-08 00:48:06.446216 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-01-08 00:48:06.446233 | orchestrator | Thursday 08 January 2026 00:47:18 +0000 (0:00:01.035) 0:00:42.038 ******
2026-01-08 00:48:06.446251 | orchestrator | changed: [testbed-manager]
2026-01-08 00:48:06.446270 | orchestrator |
2026-01-08 00:48:06.446289 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-01-08 00:48:06.446308 | orchestrator | Thursday 08 January 2026 00:47:18 +0000 (0:00:00.519) 0:00:42.558 ******
2026-01-08 00:48:06.446327 | orchestrator | ok: [testbed-manager]
2026-01-08 00:48:06.446346 | orchestrator |
2026-01-08 00:48:06.446684 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 00:48:06.446756 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:48:06.446772 | orchestrator |
2026-01-08 00:48:06.446784 | orchestrator |
2026-01-08 00:48:06.446795 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 00:48:06.446808 | orchestrator | Thursday 08 January 2026 00:47:19 +0000 (0:00:00.534) 0:00:43.092 ******
2026-01-08 00:48:06.446819 | orchestrator | ===============================================================================
2026-01-08 00:48:06.446831 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 31.51s
2026-01-08 00:48:06.446842 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.29s
2026-01-08 00:48:06.446853 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.27s
2026-01-08 00:48:06.446864 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.81s
2026-01-08 00:48:06.446875 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.29s
2026-01-08 00:48:06.446886 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.04s
2026-01-08 00:48:06.446897 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.86s
2026-01-08 00:48:06.446908 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.55s
2026-01-08 00:48:06.446919 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.53s
2026-01-08 00:48:06.446930 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.52s
2026-01-08 00:48:06.446941 | orchestrator |
2026-01-08 00:48:06.446952 | orchestrator |
2026-01-08 00:48:06.446963 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-08 00:48:06.446973 | orchestrator |
2026-01-08 00:48:06.446999 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-08 00:48:06.447010 | orchestrator | Thursday 08 January 2026 00:46:39 +0000 (0:00:00.856) 0:00:00.856 ******
2026-01-08 00:48:06.447022 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-01-08 00:48:06.447034 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-01-08 00:48:06.447045 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-01-08 00:48:06.447057 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-01-08 00:48:06.447068 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-01-08 00:48:06.447079 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-01-08 00:48:06.447089 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-01-08 00:48:06.447100 | orchestrator |
2026-01-08 00:48:06.447111 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-01-08 00:48:06.447122 | orchestrator |
2026-01-08 00:48:06.447133 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-01-08 00:48:06.447144 | orchestrator | Thursday 08 January 2026 00:46:40 +0000 (0:00:01.620) 0:00:02.477 ******
2026-01-08 00:48:06.447176 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-08 00:48:06.447196 | orchestrator |
2026-01-08 00:48:06.447208 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-01-08 00:48:06.447219 | orchestrator | Thursday 08 January 2026 00:46:42 +0000 (0:00:02.025) 0:00:04.503 ******
2026-01-08 00:48:06.447230 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:48:06.447242 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:48:06.447253 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:48:06.447264 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:48:06.447275 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:48:06.447317 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:48:06.447338 | orchestrator | ok: [testbed-manager]
2026-01-08 00:48:06.447349 | orchestrator |
2026-01-08 00:48:06.447394 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-01-08 00:48:06.447405 | orchestrator | Thursday 08 January 2026 00:46:44 +0000 (0:00:01.791) 0:00:06.294 ******
2026-01-08 00:48:06.447416 | orchestrator | ok: [testbed-manager]
2026-01-08 00:48:06.447427 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:48:06.447438 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:48:06.447449 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:48:06.447460 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:48:06.447470 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:48:06.447481 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:48:06.447492 | orchestrator |
2026-01-08 00:48:06.447503 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-01-08 00:48:06.447514 | orchestrator | Thursday 08 January 2026 00:46:48 +0000 (0:00:04.391) 0:00:10.686 ******
2026-01-08 00:48:06.447525 | orchestrator | changed: [testbed-manager]
2026-01-08 00:48:06.447537 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:48:06.447548 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:48:06.447559 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:48:06.447570 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:48:06.447580 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:48:06.447591 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:48:06.447602 | orchestrator |
2026-01-08 00:48:06.447613 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-01-08 00:48:06.447624 | orchestrator | Thursday 08 January 2026 00:46:51 +0000 (0:00:02.833) 0:00:13.519 ******
2026-01-08 00:48:06.447635 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:48:06.447646 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:48:06.447657 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:48:06.447668 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:48:06.447679 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:48:06.447690 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:48:06.447700 | orchestrator | changed: [testbed-manager]
2026-01-08 00:48:06.447711 | orchestrator |
2026-01-08 00:48:06.447722 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-01-08 00:48:06.447734 | orchestrator | Thursday 08 January 2026 00:47:00 +0000 (0:00:09.137) 0:00:22.656 ******
2026-01-08 00:48:06.447745 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:48:06.447756 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:48:06.447767 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:48:06.447777 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:48:06.447788 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:48:06.447799 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:48:06.447810 | orchestrator | changed: [testbed-manager]
2026-01-08 00:48:06.447821 | orchestrator |
2026-01-08 00:48:06.447832 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-01-08 00:48:06.447843 | orchestrator | Thursday 08 January 2026 00:47:41 +0000 (0:00:40.482) 0:01:03.138 ******
2026-01-08 00:48:06.447855 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-08 00:48:06.447868 | orchestrator |
2026-01-08 00:48:06.447879 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-01-08 00:48:06.447890 | orchestrator | Thursday 08 January 2026 00:47:43 +0000 (0:00:02.487) 0:01:05.626 ******
2026-01-08 00:48:06.447901 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-01-08 00:48:06.447912 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-01-08 00:48:06.447923 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-01-08 00:48:06.447934 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-01-08 00:48:06.447945 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-01-08 00:48:06.447956 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-01-08 00:48:06.447974 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-01-08 00:48:06.447985 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-01-08 00:48:06.447996 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-01-08 00:48:06.448006 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-01-08 00:48:06.448017 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-01-08 00:48:06.448439 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-01-08 00:48:06.448472 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-01-08 00:48:06.448483 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-01-08 00:48:06.448494 | orchestrator |
2026-01-08 00:48:06.448505 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-01-08 00:48:06.448518 | orchestrator | Thursday 08 January 2026 00:47:51 +0000 (0:00:07.187) 0:01:12.813 ******
2026-01-08 00:48:06.448529 | orchestrator | ok: [testbed-manager]
2026-01-08 00:48:06.448541 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:48:06.448552 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:48:06.448563 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:48:06.448574 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:48:06.448585 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:48:06.448596 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:48:06.448607 | orchestrator |
2026-01-08 00:48:06.448618 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-01-08 00:48:06.448629 | orchestrator | Thursday 08 January 2026 00:47:52 +0000 (0:00:01.778) 0:01:14.592 ******
2026-01-08 00:48:06.448640 | orchestrator | changed: [testbed-manager]
2026-01-08 00:48:06.448651 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:48:06.448662 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:48:06.448673 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:48:06.448684 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:48:06.448694 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:48:06.448705 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:48:06.448716 | orchestrator |
2026-01-08 00:48:06.448727 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-01-08 00:48:06.448752 | orchestrator | Thursday 08 January 2026 00:47:54 +0000 (0:00:01.514) 0:01:16.106 ******
2026-01-08 00:48:06.448764 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:48:06.448775 | orchestrator | ok: [testbed-manager]
2026-01-08 00:48:06.448786 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:48:06.448797 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:48:06.448808 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:48:06.448826 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:48:06.448842 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:48:06.448871 | orchestrator |
2026-01-08 00:48:06.448889 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-01-08 00:48:06.448905 | orchestrator | Thursday 08 January 2026 00:47:55 +0000 (0:00:01.371) 0:01:17.477 ******
2026-01-08 00:48:06.448920 | orchestrator | ok: [testbed-manager]
2026-01-08 00:48:06.448938 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:48:06.448962 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:48:06.448980 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:48:06.448996 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:48:06.449014 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:48:06.449031 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:48:06.449045 | orchestrator |
2026-01-08 00:48:06.449057 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-01-08 00:48:06.449068 | orchestrator | Thursday 08 January 2026 00:47:57 +0000 (0:00:02.124) 0:01:19.602 ******
2026-01-08 00:48:06.449080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-01-08 00:48:06.449095 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-08 00:48:06.449125 | orchestrator |
2026-01-08 00:48:06.449137 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-01-08 00:48:06.449150 | orchestrator | Thursday 08 January 2026 00:47:59 +0000 (0:00:01.518) 0:01:21.120 ******
2026-01-08 00:48:06.449162 | orchestrator | changed: [testbed-manager]
2026-01-08 00:48:06.449173 | orchestrator |
2026-01-08 00:48:06.449184 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-01-08 00:48:06.449195 | orchestrator | Thursday 08 January 2026 00:48:01 +0000 (0:00:02.247) 0:01:23.368 ******
2026-01-08 00:48:06.449207 | orchestrator | changed: [testbed-manager]
2026-01-08 00:48:06.449218 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:48:06.449229 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:48:06.449241 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:48:06.449252 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:48:06.449264 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:48:06.449276 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:48:06.449285 | orchestrator |
2026-01-08 00:48:06.449295 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 00:48:06.449305 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:48:06.449316 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:48:06.449326 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:48:06.449336 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:48:06.449346 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:48:06.449382 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:48:06.449392 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:48:06.449402 | orchestrator |
2026-01-08 00:48:06.449412 | orchestrator |
2026-01-08 00:48:06.449422 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 00:48:06.449432 | orchestrator | Thursday 08 January 2026 00:48:05 +0000 (0:00:03.620) 0:01:26.988 ******
2026-01-08 00:48:06.449441 | orchestrator | ===============================================================================
2026-01-08 00:48:06.449451 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 40.48s
2026-01-08 00:48:06.449461 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.14s
2026-01-08 00:48:06.449471 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 7.19s
2026-01-08 00:48:06.449480 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.39s
2026-01-08 00:48:06.449490 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.62s
2026-01-08 00:48:06.449500 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.83s
2026-01-08 00:48:06.449509 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.49s
2026-01-08 00:48:06.449519 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.25s
2026-01-08 00:48:06.449528 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.12s
2026-01-08 00:48:06.449538 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.03s
2026-01-08 00:48:06.449548 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.79s
2026-01-08 00:48:06.449572 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.78s
2026-01-08 00:48:06.449582 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.62s
2026-01-08 00:48:06.449592 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.52s
2026-01-08 00:48:06.449602 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.51s
2026-01-08 00:48:06.449612 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.37s
2026-01-08 00:48:06.449627 | orchestrator | 2026-01-08 00:48:06 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED
2026-01-08 00:48:06.449637 | orchestrator | 2026-01-08 00:48:06 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED
2026-01-08 00:48:06.449886 | orchestrator | 2026-01-08 00:48:06 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED
2026-01-08 00:48:06.454509 | orchestrator | 2026-01-08 00:48:06 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:48:06.454588 | orchestrator | 2026-01-08 00:48:06 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:48:09.507609 | orchestrator | 2026-01-08 00:48:09 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED
2026-01-08 00:48:09.509025 | orchestrator | 2026-01-08 00:48:09 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED
2026-01-08 00:48:09.511143 | orchestrator | 2026-01-08 00:48:09 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED
2026-01-08 00:48:09.514264 | orchestrator | 2026-01-08 00:48:09 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:48:09.515743 | orchestrator |
2026-01-08 00:48:09 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:48:12.562139 | orchestrator | 2026-01-08 00:48:12 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:48:12.563760 | orchestrator | 2026-01-08 00:48:12 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:48:12.565610 | orchestrator | 2026-01-08 00:48:12 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:48:12.566520 | orchestrator | 2026-01-08 00:48:12 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:48:12.566561 | orchestrator | 2026-01-08 00:48:12 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:48:15.613077 | orchestrator | 2026-01-08 00:48:15 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:48:15.616752 | orchestrator | 2026-01-08 00:48:15 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:48:15.619505 | orchestrator | 2026-01-08 00:48:15 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:48:15.621189 | orchestrator | 2026-01-08 00:48:15 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:48:15.621230 | orchestrator | 2026-01-08 00:48:15 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:48:18.664258 | orchestrator | 2026-01-08 00:48:18 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:48:18.665097 | orchestrator | 2026-01-08 00:48:18 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:48:18.666316 | orchestrator | 2026-01-08 00:48:18 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state STARTED 2026-01-08 00:48:18.668161 | orchestrator | 2026-01-08 00:48:18 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:48:18.668193 | orchestrator | 2026-01-08 00:48:18 | INFO  | 
Wait 1 second(s) until the next check 2026-01-08 00:48:21.720085 | orchestrator | 2026-01-08 00:48:21 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:48:21.721128 | orchestrator | 2026-01-08 00:48:21 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:48:21.721633 | orchestrator | 2026-01-08 00:48:21 | INFO  | Task 66a14461-dd50-4c1e-a99e-c1f75e11fdd6 is in state SUCCESS 2026-01-08 00:48:21.723746 | orchestrator | 2026-01-08 00:48:21 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:48:21.723780 | orchestrator | 2026-01-08 00:48:21 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:48:24.769718 | orchestrator | 2026-01-08 00:48:24 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:48:24.772808 | orchestrator | 2026-01-08 00:48:24 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:48:24.775456 | orchestrator | 2026-01-08 00:48:24 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:48:24.776408 | orchestrator | 2026-01-08 00:48:24 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:48:27.820995 | orchestrator | 2026-01-08 00:48:27 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:48:27.822589 | orchestrator | 2026-01-08 00:48:27 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:48:27.822636 | orchestrator | 2026-01-08 00:48:27 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:48:27.822641 | orchestrator | 2026-01-08 00:48:27 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:48:30.886439 | orchestrator | 2026-01-08 00:48:30 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:48:30.889065 | orchestrator | 2026-01-08 00:48:30 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state 
STARTED 2026-01-08 00:48:30.891491 | orchestrator | 2026-01-08 00:48:30 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:48:30.891531 | orchestrator | 2026-01-08 00:48:30 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:48:33.941904 | orchestrator | 2026-01-08 00:48:33 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:48:33.944357 | orchestrator | 2026-01-08 00:48:33 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:48:33.945811 | orchestrator | 2026-01-08 00:48:33 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:48:33.945835 | orchestrator | 2026-01-08 00:48:33 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:48:36.979791 | orchestrator | 2026-01-08 00:48:36 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:48:36.983010 | orchestrator | 2026-01-08 00:48:36 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:48:36.984489 | orchestrator | 2026-01-08 00:48:36 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:48:36.984538 | orchestrator | 2026-01-08 00:48:36 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:48:40.027474 | orchestrator | 2026-01-08 00:48:40 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:48:40.028332 | orchestrator | 2026-01-08 00:48:40 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:48:40.030068 | orchestrator | 2026-01-08 00:48:40 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:48:40.030228 | orchestrator | 2026-01-08 00:48:40 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:48:43.073218 | orchestrator | 2026-01-08 00:48:43 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:48:43.075450 | orchestrator | 
2026-01-08 00:48:43 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:48:43.077399 | orchestrator | 2026-01-08 00:48:43 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:48:43.077435 | orchestrator | 2026-01-08 00:48:43 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:48:46.111704 | orchestrator | 2026-01-08 00:48:46 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:48:46.113966 | orchestrator | 2026-01-08 00:48:46 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:48:46.117003 | orchestrator | 2026-01-08 00:48:46 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:48:46.117127 | orchestrator | 2026-01-08 00:48:46 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:48:49.165311 | orchestrator | 2026-01-08 00:48:49 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:48:49.168568 | orchestrator | 2026-01-08 00:48:49 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:48:49.173035 | orchestrator | 2026-01-08 00:48:49 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:48:49.173097 | orchestrator | 2026-01-08 00:48:49 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:48:52.225338 | orchestrator | 2026-01-08 00:48:52 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:48:52.227733 | orchestrator | 2026-01-08 00:48:52 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:48:52.229312 | orchestrator | 2026-01-08 00:48:52 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:48:52.229540 | orchestrator | 2026-01-08 00:48:52 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:48:55.270794 | orchestrator | 2026-01-08 00:48:55 | INFO  | Task 
9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:48:55.272906 | orchestrator | 2026-01-08 00:48:55 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:48:55.273568 | orchestrator | 2026-01-08 00:48:55 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:48:55.273600 | orchestrator | 2026-01-08 00:48:55 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:48:58.323149 | orchestrator | 2026-01-08 00:48:58 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:48:58.323937 | orchestrator | 2026-01-08 00:48:58 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state STARTED 2026-01-08 00:48:58.325527 | orchestrator | 2026-01-08 00:48:58 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:48:58.325568 | orchestrator | 2026-01-08 00:48:58 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:49:01.369208 | orchestrator | 2026-01-08 00:49:01 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:49:01.374979 | orchestrator | 2026-01-08 00:49:01 | INFO  | Task 81977b22-84ac-4bf5-91b3-f652a31a46de is in state SUCCESS 2026-01-08 00:49:01.377312 | orchestrator | 2026-01-08 00:49:01.377426 | orchestrator | 2026-01-08 00:49:01.377505 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-01-08 00:49:01.377511 | orchestrator | 2026-01-08 00:49:01.377617 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-01-08 00:49:01.377623 | orchestrator | Thursday 08 January 2026 00:46:55 +0000 (0:00:00.190) 0:00:00.190 ****** 2026-01-08 00:49:01.377627 | orchestrator | ok: [testbed-manager] 2026-01-08 00:49:01.377632 | orchestrator | 2026-01-08 00:49:01.377636 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-01-08 00:49:01.377640 | 
orchestrator | Thursday 08 January 2026 00:46:56 +0000 (0:00:01.251) 0:00:01.441 ****** 2026-01-08 00:49:01.377644 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-01-08 00:49:01.377649 | orchestrator | 2026-01-08 00:49:01.377653 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-01-08 00:49:01.377656 | orchestrator | Thursday 08 January 2026 00:46:57 +0000 (0:00:00.724) 0:00:02.166 ****** 2026-01-08 00:49:01.377660 | orchestrator | changed: [testbed-manager] 2026-01-08 00:49:01.377664 | orchestrator | 2026-01-08 00:49:01.377668 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-01-08 00:49:01.377672 | orchestrator | Thursday 08 January 2026 00:46:58 +0000 (0:00:00.968) 0:00:03.135 ****** 2026-01-08 00:49:01.377676 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2026-01-08 00:49:01.377680 | orchestrator | ok: [testbed-manager] 2026-01-08 00:49:01.377686 | orchestrator | 2026-01-08 00:49:01.377692 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-01-08 00:49:01.377698 | orchestrator | Thursday 08 January 2026 00:48:13 +0000 (0:01:14.764) 0:01:17.899 ****** 2026-01-08 00:49:01.377704 | orchestrator | changed: [testbed-manager] 2026-01-08 00:49:01.377710 | orchestrator | 2026-01-08 00:49:01.377716 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 00:49:01.377722 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-08 00:49:01.378294 | orchestrator | 2026-01-08 00:49:01.378301 | orchestrator | 2026-01-08 00:49:01.378308 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 00:49:01.378314 | orchestrator | Thursday 08 January 2026 00:48:19 +0000 (0:00:06.007) 
0:01:23.908 ****** 2026-01-08 00:49:01.378321 | orchestrator | =============================================================================== 2026-01-08 00:49:01.378326 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 74.76s 2026-01-08 00:49:01.378333 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 6.01s 2026-01-08 00:49:01.378355 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.25s 2026-01-08 00:49:01.378361 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 0.97s 2026-01-08 00:49:01.378368 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.72s 2026-01-08 00:49:01.378374 | orchestrator | 2026-01-08 00:49:01.378380 | orchestrator | 2026-01-08 00:49:01.378387 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-01-08 00:49:01.378392 | orchestrator | 2026-01-08 00:49:01.378398 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-01-08 00:49:01.378404 | orchestrator | Thursday 08 January 2026 00:46:30 +0000 (0:00:00.360) 0:00:00.360 ****** 2026-01-08 00:49:01.378411 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 00:49:01.378419 | orchestrator | 2026-01-08 00:49:01.378425 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-01-08 00:49:01.378432 | orchestrator | Thursday 08 January 2026 00:46:31 +0000 (0:00:01.213) 0:00:01.573 ****** 2026-01-08 00:49:01.378436 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-08 00:49:01.378441 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-08 
00:49:01.378445 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-08 00:49:01.378458 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-08 00:49:01.378463 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-08 00:49:01.378466 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-08 00:49:01.378476 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-08 00:49:01.378482 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-08 00:49:01.378488 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-08 00:49:01.378496 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-08 00:49:01.378504 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-08 00:49:01.378511 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-08 00:49:01.378517 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-08 00:49:01.378522 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-08 00:49:01.378528 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-08 00:49:01.378533 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-08 00:49:01.378582 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-08 00:49:01.378590 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-08 00:49:01.378596 | orchestrator | changed: 
[testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-08 00:49:01.378603 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-08 00:49:01.378608 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-08 00:49:01.378614 | orchestrator | 2026-01-08 00:49:01.378620 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-01-08 00:49:01.378625 | orchestrator | Thursday 08 January 2026 00:46:35 +0000 (0:00:04.027) 0:00:05.601 ****** 2026-01-08 00:49:01.378652 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 00:49:01.378660 | orchestrator | 2026-01-08 00:49:01.378666 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-01-08 00:49:01.378672 | orchestrator | Thursday 08 January 2026 00:46:36 +0000 (0:00:01.239) 0:00:06.841 ****** 2026-01-08 00:49:01.378682 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.378692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.378698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.378711 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.378721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.378752 | orchestrator | changed: [testbed-manager] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.378760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.378767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.378775 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.378780 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.378787 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.378792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.378797 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.378837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.378847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.378854 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.378861 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.378871 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.378878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.378885 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.378894 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.378901 | orchestrator | 2026-01-08 00:49:01.378909 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-01-08 00:49:01.378915 | orchestrator | Thursday 08 January 2026 00:46:42 +0000 (0:00:05.601) 0:00:12.443 ****** 2026-01-08 00:49:01.378940 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-08 00:49:01.378946 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.378951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-08 00:49:01.378959 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.378964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.378969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.378976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-08 00:49:01.378981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.378998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.379004 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:49:01.379009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-08 00:49:01.379014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.379022 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:49:01.379027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.379032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-08 00:49:01.379036 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:49:01.379041 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.379045 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:49:01.379052 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-08 00:49:01.379069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.379074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-08 00:49:01.379079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.379088 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:49:01.379095 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.379100 | orchestrator | skipping: 
[testbed-node-4] 2026-01-08 00:49:01.379106 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.379113 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.379120 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:49:01.379126 | orchestrator | 2026-01-08 00:49:01.379132 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-01-08 00:49:01.379139 | orchestrator | Thursday 08 January 2026 00:46:45 +0000 (0:00:03.390) 0:00:15.833 ****** 2026-01-08 00:49:01.379149 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-08 00:49:01.379156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-08 00:49:01.379181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-08 00:49:01.379186 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.379197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.379202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.379207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.379212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.379219 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.379224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-08 00:49:01.379229 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:49:01.379238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.379246 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:49:01.379251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.379258 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:49:01.379263 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:49:01.379269 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-08 00:49:01.379275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-08 00:49:01.379280 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.379286 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-08 00:49:01.379294 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.379312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.379322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.379328 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:49:01.379334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.379400 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:49:01.379407 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.379414 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:49:01.379420 | orchestrator | 2026-01-08 00:49:01.379426 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-01-08 00:49:01.379432 | orchestrator | Thursday 08 January 2026 00:46:50 +0000 (0:00:04.323) 0:00:20.157 ****** 2026-01-08 00:49:01.379438 | orchestrator | skipping: 
[testbed-manager] 2026-01-08 00:49:01.379444 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:49:01.379447 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:49:01.379451 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:49:01.379455 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:49:01.379459 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:49:01.379463 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:49:01.379466 | orchestrator | 2026-01-08 00:49:01.379470 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-01-08 00:49:01.379475 | orchestrator | Thursday 08 January 2026 00:46:51 +0000 (0:00:00.899) 0:00:21.056 ****** 2026-01-08 00:49:01.379478 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:49:01.379482 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:49:01.379486 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:49:01.379490 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:49:01.379494 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:49:01.379498 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:49:01.379502 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:49:01.379505 | orchestrator | 2026-01-08 00:49:01.379509 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-01-08 00:49:01.379513 | orchestrator | Thursday 08 January 2026 00:46:52 +0000 (0:00:01.152) 0:00:22.209 ****** 2026-01-08 00:49:01.379517 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:49:01.379521 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:49:01.379524 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:49:01.379528 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:49:01.379532 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:49:01.379536 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:49:01.379540 | orchestrator | skipping: 
[testbed-node-5] 2026-01-08 00:49:01.379548 | orchestrator | 2026-01-08 00:49:01.379552 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-01-08 00:49:01.379556 | orchestrator | Thursday 08 January 2026 00:46:53 +0000 (0:00:01.267) 0:00:23.476 ****** 2026-01-08 00:49:01.379560 | orchestrator | changed: [testbed-manager] 2026-01-08 00:49:01.379564 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:49:01.379568 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:49:01.379571 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:49:01.379575 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:49:01.379579 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:49:01.379583 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:49:01.379587 | orchestrator | 2026-01-08 00:49:01.379591 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-01-08 00:49:01.379594 | orchestrator | Thursday 08 January 2026 00:46:56 +0000 (0:00:02.649) 0:00:26.126 ****** 2026-01-08 00:49:01.379603 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.379608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.379612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.379616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.379624 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.379628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.379636 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.379642 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.379649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.379654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.379658 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.379662 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.379666 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.379674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.379680 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.379685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.379692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.379697 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.379701 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.379705 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.379711 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.379721 | orchestrator | 2026-01-08 00:49:01.379729 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-01-08 00:49:01.379738 | orchestrator | Thursday 08 January 2026 00:47:01 +0000 (0:00:05.010) 0:00:31.136 ****** 2026-01-08 00:49:01.379744 | orchestrator | [WARNING]: Skipped 2026-01-08 00:49:01.379750 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-01-08 00:49:01.379757 | orchestrator | to this access issue: 2026-01-08 00:49:01.379763 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-01-08 00:49:01.379769 | orchestrator | directory 2026-01-08 00:49:01.379775 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-08 00:49:01.379780 | orchestrator | 2026-01-08 00:49:01.379786 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-01-08 00:49:01.379792 | orchestrator | Thursday 08 January 2026 00:47:02 +0000 (0:00:01.229) 0:00:32.366 ****** 2026-01-08 00:49:01.379798 | orchestrator | [WARNING]: Skipped 2026-01-08 00:49:01.379804 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 
2026-01-08 00:49:01.379810 | orchestrator | to this access issue: 2026-01-08 00:49:01.379815 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-01-08 00:49:01.379821 | orchestrator | directory 2026-01-08 00:49:01.379827 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-08 00:49:01.379832 | orchestrator | 2026-01-08 00:49:01.379839 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-01-08 00:49:01.379848 | orchestrator | Thursday 08 January 2026 00:47:03 +0000 (0:00:00.917) 0:00:33.284 ****** 2026-01-08 00:49:01.379854 | orchestrator | [WARNING]: Skipped 2026-01-08 00:49:01.379860 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-01-08 00:49:01.379865 | orchestrator | to this access issue: 2026-01-08 00:49:01.379871 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-01-08 00:49:01.379878 | orchestrator | directory 2026-01-08 00:49:01.379882 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-08 00:49:01.379885 | orchestrator | 2026-01-08 00:49:01.379889 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-01-08 00:49:01.379894 | orchestrator | Thursday 08 January 2026 00:47:04 +0000 (0:00:01.392) 0:00:34.676 ****** 2026-01-08 00:49:01.379900 | orchestrator | [WARNING]: Skipped 2026-01-08 00:49:01.379905 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-01-08 00:49:01.379911 | orchestrator | to this access issue: 2026-01-08 00:49:01.379917 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-01-08 00:49:01.379923 | orchestrator | directory 2026-01-08 00:49:01.379929 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-08 00:49:01.379935 | orchestrator | 2026-01-08 00:49:01.379945 | orchestrator | 
TASK [common : Copying over fluentd.conf] ************************************** 2026-01-08 00:49:01.379951 | orchestrator | Thursday 08 January 2026 00:47:05 +0000 (0:00:01.012) 0:00:35.689 ****** 2026-01-08 00:49:01.379957 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:49:01.379963 | orchestrator | changed: [testbed-manager] 2026-01-08 00:49:01.379969 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:49:01.379975 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:49:01.379982 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:49:01.379988 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:49:01.379993 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:49:01.380000 | orchestrator | 2026-01-08 00:49:01.380005 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-01-08 00:49:01.380009 | orchestrator | Thursday 08 January 2026 00:47:09 +0000 (0:00:03.856) 0:00:39.546 ****** 2026-01-08 00:49:01.380018 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-08 00:49:01.380022 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-08 00:49:01.380026 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-08 00:49:01.380030 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-08 00:49:01.380034 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-08 00:49:01.380038 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-08 00:49:01.380041 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-08 00:49:01.380045 | 
orchestrator | 2026-01-08 00:49:01.380049 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-01-08 00:49:01.380053 | orchestrator | Thursday 08 January 2026 00:47:12 +0000 (0:00:03.176) 0:00:42.722 ****** 2026-01-08 00:49:01.380057 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:49:01.380060 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:49:01.380064 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:49:01.380068 | orchestrator | changed: [testbed-manager] 2026-01-08 00:49:01.380072 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:49:01.380075 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:49:01.380079 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:49:01.380083 | orchestrator | 2026-01-08 00:49:01.380087 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-01-08 00:49:01.380091 | orchestrator | Thursday 08 January 2026 00:47:15 +0000 (0:00:02.392) 0:00:45.115 ****** 2026-01-08 00:49:01.380095 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.380099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.380106 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.380114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.380123 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.380127 | 
orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.380131 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.380138 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.380142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.380146 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.380155 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.380171 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.380178 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.380185 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.380191 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.380199 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.380203 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.380212 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.380218 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.380228 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.380243 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.380249 | orchestrator | 2026-01-08 00:49:01.380255 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-01-08 00:49:01.380261 | orchestrator | Thursday 08 January 2026 00:47:16 +0000 (0:00:01.547) 0:00:46.663 ****** 2026-01-08 00:49:01.380267 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-08 00:49:01.380272 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-08 00:49:01.380278 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-08 00:49:01.380284 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-08 00:49:01.380289 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-08 00:49:01.380295 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-08 00:49:01.380301 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-08 00:49:01.380306 | orchestrator | 2026-01-08 00:49:01.380312 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] 
********************** 2026-01-08 00:49:01.380318 | orchestrator | Thursday 08 January 2026 00:47:19 +0000 (0:00:02.595) 0:00:49.259 ****** 2026-01-08 00:49:01.380324 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-08 00:49:01.380330 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-08 00:49:01.380336 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-08 00:49:01.380368 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-08 00:49:01.380374 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-08 00:49:01.380380 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-08 00:49:01.380384 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-08 00:49:01.380388 | orchestrator | 2026-01-08 00:49:01.380391 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-01-08 00:49:01.380395 | orchestrator | Thursday 08 January 2026 00:47:21 +0000 (0:00:02.089) 0:00:51.348 ****** 2026-01-08 00:49:01.380399 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.380411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.380415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.380426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.380431 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.380435 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.380439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.380443 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-08 00:49:01.380447 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.380457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.380463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.380468 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.380472 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.380477 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.380481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-08 00:49:01.380485 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.380492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.380499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.380503 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-01-08 00:49:01.380515 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.380519 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:49:01.380523 | orchestrator | 2026-01-08 00:49:01.380527 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-01-08 00:49:01.380531 | orchestrator | Thursday 08 January 2026 00:47:24 +0000 (0:00:03.496) 0:00:54.845 ****** 2026-01-08 00:49:01.380535 | orchestrator | changed: [testbed-manager] => { 2026-01-08 00:49:01.380539 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:49:01.380543 | orchestrator | } 2026-01-08 00:49:01.380547 | orchestrator | changed: [testbed-node-0] => { 2026-01-08 00:49:01.380551 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:49:01.380555 | orchestrator | } 2026-01-08 00:49:01.380558 | orchestrator | changed: [testbed-node-1] => { 2026-01-08 00:49:01.380562 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:49:01.380566 | orchestrator | } 2026-01-08 00:49:01.380570 | orchestrator | changed: [testbed-node-2] => { 2026-01-08 00:49:01.380575 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:49:01.380581 | orchestrator | } 2026-01-08 
00:49:01.380587 | orchestrator | changed: [testbed-node-3] => { 2026-01-08 00:49:01.380592 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:49:01.380597 | orchestrator | } 2026-01-08 00:49:01.380603 | orchestrator | changed: [testbed-node-4] => { 2026-01-08 00:49:01.380608 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:49:01.380614 | orchestrator | } 2026-01-08 00:49:01.380625 | orchestrator | changed: [testbed-node-5] => { 2026-01-08 00:49:01.380630 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:49:01.380636 | orchestrator | } 2026-01-08 00:49:01.380642 | orchestrator | 2026-01-08 00:49:01.380648 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-08 00:49:01.380655 | orchestrator | Thursday 08 January 2026 00:47:25 +0000 (0:00:01.074) 0:00:55.919 ****** 2026-01-08 00:49:01.380661 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-08 00:49:01.380668 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.380678 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.380684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-08 00:49:01.380692 | orchestrator | skipping: [testbed-manager] 2026-01-08 00:49:01.380704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.380712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.380718 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:49:01.380724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-08 00:49:01.380735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.380742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-01-08 00:49:01.380748 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:49:01.380755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-08 00:49:01.380765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.380771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.380777 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:49:01.380786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-08 00:49:01.380792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.380806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.380812 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:49:01.380818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-08 00:49:01.380824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.380831 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.380841 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:49:01.380855 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-08 00:49:01.380869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.380876 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2025.1', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:49:01.380891 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:49:01.380900 | orchestrator | 2026-01-08 00:49:01.380909 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-01-08 00:49:01.380916 | orchestrator | Thursday 08 January 2026 00:47:27 +0000 (0:00:01.883) 0:00:57.803 ****** 2026-01-08 00:49:01.380922 | orchestrator | changed: [testbed-manager] 2026-01-08 00:49:01.380933 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:49:01.380942 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:49:01.380950 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:49:01.380956 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:49:01.380962 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:49:01.380968 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:49:01.380974 | orchestrator | 2026-01-08 00:49:01.380981 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-01-08 00:49:01.380987 | 
orchestrator | Thursday 08 January 2026 00:47:29 +0000 (0:00:01.813) 0:00:59.617 ****** 2026-01-08 00:49:01.380993 | orchestrator | changed: [testbed-manager] 2026-01-08 00:49:01.380998 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:49:01.381005 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:49:01.381011 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:49:01.381017 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:49:01.381022 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:49:01.381028 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:49:01.381034 | orchestrator | 2026-01-08 00:49:01.381039 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-08 00:49:01.381045 | orchestrator | Thursday 08 January 2026 00:47:30 +0000 (0:00:01.294) 0:01:00.911 ****** 2026-01-08 00:49:01.381051 | orchestrator | 2026-01-08 00:49:01.381057 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-08 00:49:01.381063 | orchestrator | Thursday 08 January 2026 00:47:31 +0000 (0:00:00.068) 0:01:00.980 ****** 2026-01-08 00:49:01.381069 | orchestrator | 2026-01-08 00:49:01.381074 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-08 00:49:01.381078 | orchestrator | Thursday 08 January 2026 00:47:31 +0000 (0:00:00.064) 0:01:01.044 ****** 2026-01-08 00:49:01.381082 | orchestrator | 2026-01-08 00:49:01.381086 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-08 00:49:01.381090 | orchestrator | Thursday 08 January 2026 00:47:31 +0000 (0:00:00.239) 0:01:01.284 ****** 2026-01-08 00:49:01.381093 | orchestrator | 2026-01-08 00:49:01.381097 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-08 00:49:01.381101 | orchestrator | Thursday 08 January 2026 00:47:31 +0000 (0:00:00.077) 
0:01:01.362 ****** 2026-01-08 00:49:01.381105 | orchestrator | 2026-01-08 00:49:01.381109 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-08 00:49:01.381112 | orchestrator | Thursday 08 January 2026 00:47:31 +0000 (0:00:00.071) 0:01:01.433 ****** 2026-01-08 00:49:01.381116 | orchestrator | 2026-01-08 00:49:01.381120 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-08 00:49:01.381124 | orchestrator | Thursday 08 January 2026 00:47:31 +0000 (0:00:00.099) 0:01:01.533 ****** 2026-01-08 00:49:01.381128 | orchestrator | 2026-01-08 00:49:01.381131 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-01-08 00:49:01.381135 | orchestrator | Thursday 08 January 2026 00:47:31 +0000 (0:00:00.118) 0:01:01.651 ****** 2026-01-08 00:49:01.381139 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:49:01.381143 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:49:01.381147 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:49:01.381151 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:49:01.381154 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:49:01.381158 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:49:01.381162 | orchestrator | changed: [testbed-manager] 2026-01-08 00:49:01.381166 | orchestrator | 2026-01-08 00:49:01.381170 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-01-08 00:49:01.381182 | orchestrator | Thursday 08 January 2026 00:48:05 +0000 (0:00:33.660) 0:01:35.311 ****** 2026-01-08 00:49:01.381186 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:49:01.381190 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:49:01.381194 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:49:01.381198 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:49:01.381201 | orchestrator | changed: 
[testbed-node-4] 2026-01-08 00:49:01.381205 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:49:01.381209 | orchestrator | changed: [testbed-manager] 2026-01-08 00:49:01.381213 | orchestrator | 2026-01-08 00:49:01.381217 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-01-08 00:49:01.381221 | orchestrator | Thursday 08 January 2026 00:48:47 +0000 (0:00:42.386) 0:02:17.698 ****** 2026-01-08 00:49:01.381224 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:49:01.381229 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:49:01.381233 | orchestrator | ok: [testbed-manager] 2026-01-08 00:49:01.381236 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:49:01.381240 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:49:01.381244 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:49:01.381248 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:49:01.381252 | orchestrator | 2026-01-08 00:49:01.381255 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-01-08 00:49:01.381259 | orchestrator | Thursday 08 January 2026 00:48:50 +0000 (0:00:02.345) 0:02:20.043 ****** 2026-01-08 00:49:01.381267 | orchestrator | changed: [testbed-manager] 2026-01-08 00:49:01.381271 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:49:01.381275 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:49:01.381279 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:49:01.381283 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:49:01.381287 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:49:01.381291 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:49:01.381294 | orchestrator | 2026-01-08 00:49:01.381298 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 00:49:01.381303 | orchestrator | testbed-manager : ok=24  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-08 
00:49:01.381308 | orchestrator | testbed-node-0 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-08 00:49:01.381312 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-08 00:49:01.381316 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-08 00:49:01.381320 | orchestrator | testbed-node-3 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-08 00:49:01.381324 | orchestrator | testbed-node-4 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-08 00:49:01.381328 | orchestrator | testbed-node-5 : ok=20  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-08 00:49:01.381332 | orchestrator | 2026-01-08 00:49:01.381336 | orchestrator | 2026-01-08 00:49:01.381361 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 00:49:01.381365 | orchestrator | Thursday 08 January 2026 00:49:00 +0000 (0:00:10.546) 0:02:30.590 ****** 2026-01-08 00:49:01.381369 | orchestrator | =============================================================================== 2026-01-08 00:49:01.381373 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 42.39s 2026-01-08 00:49:01.381377 | orchestrator | common : Restart fluentd container ------------------------------------- 33.66s 2026-01-08 00:49:01.381385 | orchestrator | common : Restart cron container ---------------------------------------- 10.55s 2026-01-08 00:49:01.381389 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.60s 2026-01-08 00:49:01.381393 | orchestrator | common : Copying over config.json files for services -------------------- 5.01s 2026-01-08 00:49:01.381396 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 4.32s 
2026-01-08 00:49:01.381400 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.03s 2026-01-08 00:49:01.381404 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.86s 2026-01-08 00:49:01.381408 | orchestrator | service-check-containers : common | Check containers -------------------- 3.50s 2026-01-08 00:49:01.381412 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.39s 2026-01-08 00:49:01.381416 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.18s 2026-01-08 00:49:01.381420 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.65s 2026-01-08 00:49:01.381424 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.60s 2026-01-08 00:49:01.381428 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.39s 2026-01-08 00:49:01.381432 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.35s 2026-01-08 00:49:01.381435 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.09s 2026-01-08 00:49:01.381439 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.88s 2026-01-08 00:49:01.381443 | orchestrator | common : Creating log volume -------------------------------------------- 1.81s 2026-01-08 00:49:01.381447 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.55s 2026-01-08 00:49:01.381454 | orchestrator | common : Find custom fluentd format config files ------------------------ 1.39s 2026-01-08 00:49:01.381458 | orchestrator | 2026-01-08 00:49:01 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:49:01.381462 | orchestrator | 2026-01-08 00:49:01 | INFO  | Wait 1 second(s) until the next check 
2026-01-08 00:49:04.412265 | orchestrator | 2026-01-08 00:49:04 | INFO  | Task faa2994d-ad8d-4d3f-974c-3ef69ee5c0d0 is in state STARTED 2026-01-08 00:49:04.413020 | orchestrator | 2026-01-08 00:49:04 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED 2026-01-08 00:49:04.413922 | orchestrator | 2026-01-08 00:49:04 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:49:04.414787 | orchestrator | 2026-01-08 00:49:04 | INFO  | Task 9cddd670-9df8-4107-b6e4-1521d8ef19e8 is in state STARTED 2026-01-08 00:49:04.415611 | orchestrator | 2026-01-08 00:49:04 | INFO  | Task 6001bba7-6b9e-4d2b-9ff0-e3b51d74ff2b is in state STARTED 2026-01-08 00:49:04.416284 | orchestrator | 2026-01-08 00:49:04 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:49:04.416310 | orchestrator | 2026-01-08 00:49:04 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:49:07.456919 | orchestrator | 2026-01-08 00:49:07 | INFO  | Task faa2994d-ad8d-4d3f-974c-3ef69ee5c0d0 is in state STARTED 2026-01-08 00:49:07.459216 | orchestrator | 2026-01-08 00:49:07 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED 2026-01-08 00:49:07.461936 | orchestrator | 2026-01-08 00:49:07 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:49:07.464465 | orchestrator | 2026-01-08 00:49:07 | INFO  | Task 9cddd670-9df8-4107-b6e4-1521d8ef19e8 is in state STARTED 2026-01-08 00:49:07.467291 | orchestrator | 2026-01-08 00:49:07 | INFO  | Task 6001bba7-6b9e-4d2b-9ff0-e3b51d74ff2b is in state STARTED 2026-01-08 00:49:07.468053 | orchestrator | 2026-01-08 00:49:07 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:49:07.468133 | orchestrator | 2026-01-08 00:49:07 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:49:10.493396 | orchestrator | 2026-01-08 00:49:10 | INFO  | Task faa2994d-ad8d-4d3f-974c-3ef69ee5c0d0 is in state STARTED 
2026-01-08 00:49:10.494129 | orchestrator | 2026-01-08 00:49:10 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED 2026-01-08 00:49:10.494807 | orchestrator | 2026-01-08 00:49:10 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:49:10.495450 | orchestrator | 2026-01-08 00:49:10 | INFO  | Task 9cddd670-9df8-4107-b6e4-1521d8ef19e8 is in state STARTED 2026-01-08 00:49:10.495980 | orchestrator | 2026-01-08 00:49:10 | INFO  | Task 6001bba7-6b9e-4d2b-9ff0-e3b51d74ff2b is in state STARTED 2026-01-08 00:49:10.496644 | orchestrator | 2026-01-08 00:49:10 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:49:10.496719 | orchestrator | 2026-01-08 00:49:10 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:49:13.554185 | orchestrator | 2026-01-08 00:49:13 | INFO  | Task faa2994d-ad8d-4d3f-974c-3ef69ee5c0d0 is in state STARTED 2026-01-08 00:49:13.554238 | orchestrator | 2026-01-08 00:49:13 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED 2026-01-08 00:49:13.554245 | orchestrator | 2026-01-08 00:49:13 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:49:13.555176 | orchestrator | 2026-01-08 00:49:13 | INFO  | Task 9cddd670-9df8-4107-b6e4-1521d8ef19e8 is in state STARTED 2026-01-08 00:49:13.555216 | orchestrator | 2026-01-08 00:49:13 | INFO  | Task 6001bba7-6b9e-4d2b-9ff0-e3b51d74ff2b is in state STARTED 2026-01-08 00:49:13.556607 | orchestrator | 2026-01-08 00:49:13 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:49:13.557241 | orchestrator | 2026-01-08 00:49:13 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:49:16.625149 | orchestrator | 2026-01-08 00:49:16 | INFO  | Task faa2994d-ad8d-4d3f-974c-3ef69ee5c0d0 is in state STARTED 2026-01-08 00:49:16.630188 | orchestrator | 2026-01-08 00:49:16 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED 
2026-01-08 00:49:16.637095 | orchestrator | 2026-01-08 00:49:16 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:49:16.643787 | orchestrator | 2026-01-08 00:49:16 | INFO  | Task 9cddd670-9df8-4107-b6e4-1521d8ef19e8 is in state STARTED 2026-01-08 00:49:16.643877 | orchestrator | 2026-01-08 00:49:16 | INFO  | Task 6001bba7-6b9e-4d2b-9ff0-e3b51d74ff2b is in state STARTED 2026-01-08 00:49:16.644578 | orchestrator | 2026-01-08 00:49:16 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:49:16.644611 | orchestrator | 2026-01-08 00:49:16 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:49:19.680095 | orchestrator | 2026-01-08 00:49:19 | INFO  | Task faa2994d-ad8d-4d3f-974c-3ef69ee5c0d0 is in state STARTED 2026-01-08 00:49:19.680765 | orchestrator | 2026-01-08 00:49:19 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED 2026-01-08 00:49:19.682143 | orchestrator | 2026-01-08 00:49:19 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:49:19.684134 | orchestrator | 2026-01-08 00:49:19 | INFO  | Task 9cddd670-9df8-4107-b6e4-1521d8ef19e8 is in state STARTED 2026-01-08 00:49:19.684870 | orchestrator | 2026-01-08 00:49:19 | INFO  | Task 6001bba7-6b9e-4d2b-9ff0-e3b51d74ff2b is in state STARTED 2026-01-08 00:49:19.685982 | orchestrator | 2026-01-08 00:49:19 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:49:19.686067 | orchestrator | 2026-01-08 00:49:19 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:49:22.832836 | orchestrator | 2026-01-08 00:49:22 | INFO  | Task faa2994d-ad8d-4d3f-974c-3ef69ee5c0d0 is in state STARTED 2026-01-08 00:49:22.832912 | orchestrator | 2026-01-08 00:49:22 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED 2026-01-08 00:49:22.832920 | orchestrator | 2026-01-08 00:49:22 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 
2026-01-08 00:49:22.833719 | orchestrator | 2026-01-08 00:49:22 | INFO  | Task 9cddd670-9df8-4107-b6e4-1521d8ef19e8 is in state STARTED 2026-01-08 00:49:22.834630 | orchestrator | 2026-01-08 00:49:22 | INFO  | Task 6001bba7-6b9e-4d2b-9ff0-e3b51d74ff2b is in state STARTED 2026-01-08 00:49:22.835628 | orchestrator | 2026-01-08 00:49:22 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:49:22.835667 | orchestrator | 2026-01-08 00:49:22 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:49:25.879074 | orchestrator | 2026-01-08 00:49:25.879127 | orchestrator | 2026-01-08 00:49:25.879133 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-08 00:49:25.879140 | orchestrator | 2026-01-08 00:49:25.879145 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-08 00:49:25.879151 | orchestrator | Thursday 08 January 2026 00:49:07 +0000 (0:00:00.413) 0:00:00.413 ****** 2026-01-08 00:49:25.879156 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:49:25.879163 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:49:25.879168 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:49:25.879173 | orchestrator | 2026-01-08 00:49:25.879179 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-08 00:49:25.879184 | orchestrator | Thursday 08 January 2026 00:49:07 +0000 (0:00:00.611) 0:00:01.025 ****** 2026-01-08 00:49:25.879190 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-01-08 00:49:25.879196 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-01-08 00:49:25.879201 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-01-08 00:49:25.879207 | orchestrator | 2026-01-08 00:49:25.879212 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-01-08 00:49:25.879218 | 
orchestrator |
2026-01-08 00:49:25.879223 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-01-08 00:49:25.879228 | orchestrator | Thursday 08 January 2026 00:49:08 +0000 (0:00:00.533) 0:00:01.558 ******
2026-01-08 00:49:25.879234 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:49:25.879240 | orchestrator |
2026-01-08 00:49:25.879245 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-01-08 00:49:25.879251 | orchestrator | Thursday 08 January 2026 00:49:09 +0000 (0:00:00.986) 0:00:02.544 ******
2026-01-08 00:49:25.879256 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-01-08 00:49:25.879262 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-01-08 00:49:25.879267 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-01-08 00:49:25.879272 | orchestrator |
2026-01-08 00:49:25.879278 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-01-08 00:49:25.879283 | orchestrator | Thursday 08 January 2026 00:49:10 +0000 (0:00:00.913) 0:00:03.458 ******
2026-01-08 00:49:25.879289 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-01-08 00:49:25.879294 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-01-08 00:49:25.879299 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-01-08 00:49:25.879304 | orchestrator |
2026-01-08 00:49:25.879310 | orchestrator | TASK [service-check-containers : memcached | Check containers] *****************
2026-01-08 00:49:25.879315 | orchestrator | Thursday 08 January 2026 00:49:12 +0000 (0:00:02.588) 0:00:06.046 ******
2026-01-08 00:49:25.879375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-08 00:49:25.879383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-08 00:49:25.879397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-08 00:49:25.879403 | orchestrator |
2026-01-08 00:49:25.879408 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] ***
2026-01-08 00:49:25.879413 | orchestrator | Thursday 08 January 2026 00:49:14 +0000 (0:00:01.494) 0:00:07.540 ******
2026-01-08 00:49:25.879418 | orchestrator | changed: [testbed-node-0] => {
2026-01-08 00:49:25.879423 | orchestrator |  "msg": "Notifying handlers"
2026-01-08 00:49:25.879427 | orchestrator | }
2026-01-08 00:49:25.879432 | orchestrator | changed: [testbed-node-1] => {
2026-01-08 00:49:25.879436 | orchestrator |  "msg": "Notifying handlers"
2026-01-08 00:49:25.879442 | orchestrator | }
2026-01-08 00:49:25.879447 | orchestrator | changed: [testbed-node-2] => {
2026-01-08 00:49:25.879452 | orchestrator |  "msg": "Notifying handlers"
2026-01-08 00:49:25.879458 | orchestrator | }
2026-01-08 00:49:25.879463 | orchestrator |
2026-01-08 00:49:25.879468 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-08 00:49:25.879474 | orchestrator | Thursday 08 January 2026 00:49:14 +0000 (0:00:00.490) 0:00:08.031 ******
2026-01-08 00:49:25.879480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-08 00:49:25.879490 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:49:25.879496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-08 00:49:25.879504 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:49:25.879510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-08 00:49:25.879516 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:49:25.879522 | orchestrator |
2026-01-08 00:49:25.879527 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-01-08 00:49:25.879532 | orchestrator | Thursday 08 January 2026 00:49:16 +0000 (0:00:01.607) 0:00:09.639 ******
2026-01-08 00:49:25.879538 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:49:25.879544 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:49:25.879550 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:49:25.879556 | orchestrator |
2026-01-08 00:49:25.879561 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 00:49:25.879567 | orchestrator | testbed-node-0 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-08 00:49:25.879574 | orchestrator | testbed-node-1 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-08 00:49:25.879580 | orchestrator | testbed-node-2 : ok=8  changed=5  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-08 00:49:25.879586 | orchestrator |
2026-01-08 00:49:25.879591 | orchestrator |
2026-01-08 00:49:25.879596 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 00:49:25.879602 | orchestrator | Thursday 08 January 2026 00:49:23 +0000 (0:00:07.267) 0:00:16.906 ******
2026-01-08 00:49:25.879611 | orchestrator | ===============================================================================
2026-01-08 00:49:25.879617 | orchestrator | memcached : Restart memcached container --------------------------------- 7.27s
2026-01-08 00:49:25.879623 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.59s
2026-01-08 00:49:25.879629 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.61s
2026-01-08 00:49:25.879634 | orchestrator | service-check-containers : memcached | Check containers ----------------- 1.49s
2026-01-08 00:49:25.879640 | orchestrator | memcached : include_tasks
----------------------------------------------- 0.99s
2026-01-08 00:49:25.879659 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.91s
2026-01-08 00:49:25.879665 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.61s
2026-01-08 00:49:25.879674 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s
2026-01-08 00:49:25.879680 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 0.49s
2026-01-08 00:49:25.879686 | orchestrator | 2026-01-08 00:49:25 | INFO  | Task faa2994d-ad8d-4d3f-974c-3ef69ee5c0d0 is in state STARTED
2026-01-08 00:49:25.879692 | orchestrator | 2026-01-08 00:49:25 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:49:25.879698 | orchestrator | 2026-01-08 00:49:25 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED
2026-01-08 00:49:25.879704 | orchestrator | 2026-01-08 00:49:25 | INFO  | Task 9cddd670-9df8-4107-b6e4-1521d8ef19e8 is in state STARTED
2026-01-08 00:49:25.879710 | orchestrator | 2026-01-08 00:49:25 | INFO  | Task 6001bba7-6b9e-4d2b-9ff0-e3b51d74ff2b is in state SUCCESS
2026-01-08 00:49:25.879716 | orchestrator | 2026-01-08 00:49:25 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:49:25.879721 | orchestrator | 2026-01-08 00:49:25 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:49:25.879727 | orchestrator | 2026-01-08 00:49:25 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:49:28.915414 | orchestrator | 2026-01-08 00:49:28 | INFO  | Task faa2994d-ad8d-4d3f-974c-3ef69ee5c0d0 is in state STARTED
2026-01-08 00:49:28.919115 | orchestrator | 2026-01-08 00:49:28 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:49:28.919168 | orchestrator | 2026-01-08 00:49:28 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED
2026-01-08 00:49:28.919461 | orchestrator | 2026-01-08 00:49:28 | INFO  | Task 9cddd670-9df8-4107-b6e4-1521d8ef19e8 is in state STARTED
2026-01-08 00:49:28.920545 | orchestrator | 2026-01-08 00:49:28 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:49:28.920874 | orchestrator | 2026-01-08 00:49:28 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:49:28.920900 | orchestrator | 2026-01-08 00:49:28 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:49:32.185932 | orchestrator | 2026-01-08 00:49:32 | INFO  | Task faa2994d-ad8d-4d3f-974c-3ef69ee5c0d0 is in state STARTED
2026-01-08 00:49:32.186061 | orchestrator | 2026-01-08 00:49:32 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:49:32.186284 | orchestrator | 2026-01-08 00:49:32 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED
2026-01-08 00:49:32.189646 | orchestrator | 2026-01-08 00:49:32 | INFO  | Task 9cddd670-9df8-4107-b6e4-1521d8ef19e8 is in state STARTED
2026-01-08 00:49:32.189710 | orchestrator | 2026-01-08 00:49:32 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:49:32.189719 | orchestrator | 2026-01-08 00:49:32 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:49:32.189728 | orchestrator | 2026-01-08 00:49:32 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:49:35.462303 | orchestrator | 2026-01-08 00:49:35 | INFO  | Task faa2994d-ad8d-4d3f-974c-3ef69ee5c0d0 is in state STARTED
2026-01-08 00:49:35.462486 | orchestrator | 2026-01-08 00:49:35 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:49:35.462500 | orchestrator | 2026-01-08 00:49:35 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED
2026-01-08 00:49:35.465501 | orchestrator | 2026-01-08 00:49:35 | INFO  | Task 9cddd670-9df8-4107-b6e4-1521d8ef19e8 is in state STARTED
2026-01-08 00:49:35.465656 | orchestrator | 2026-01-08 00:49:35 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:49:35.465666 | orchestrator | 2026-01-08 00:49:35 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:49:35.465672 | orchestrator | 2026-01-08 00:49:35 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:49:38.492677 | orchestrator | 2026-01-08 00:49:38 | INFO  | Task faa2994d-ad8d-4d3f-974c-3ef69ee5c0d0 is in state SUCCESS
2026-01-08 00:49:38.492791 | orchestrator |
2026-01-08 00:49:38.494193 | orchestrator |
2026-01-08 00:49:38.494233 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-08 00:49:38.494239 | orchestrator |
2026-01-08 00:49:38.494245 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-08 00:49:38.494250 | orchestrator | Thursday 08 January 2026 00:49:08 +0000 (0:00:00.472) 0:00:00.472 ******
2026-01-08 00:49:38.494256 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:49:38.494262 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:49:38.494267 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:49:38.494273 | orchestrator |
2026-01-08 00:49:38.494278 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-08 00:49:38.494284 | orchestrator | Thursday 08 January 2026 00:49:09 +0000 (0:00:00.549) 0:00:01.022 ******
2026-01-08 00:49:38.494289 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-01-08 00:49:38.494294 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-01-08 00:49:38.494299 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-01-08 00:49:38.494304 | orchestrator |
2026-01-08 00:49:38.494310 | orchestrator | PLAY [Apply role redis]
********************************************************
2026-01-08 00:49:38.494341 | orchestrator |
2026-01-08 00:49:38.494347 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-01-08 00:49:38.494375 | orchestrator | Thursday 08 January 2026 00:49:09 +0000 (0:00:00.635) 0:00:01.658 ******
2026-01-08 00:49:38.494380 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:49:38.494383 | orchestrator |
2026-01-08 00:49:38.494386 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-01-08 00:49:38.494390 | orchestrator | Thursday 08 January 2026 00:49:10 +0000 (0:00:00.724) 0:00:02.382 ******
2026-01-08 00:49:38.494395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494453 | orchestrator |
2026-01-08 00:49:38.494456 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-01-08 00:49:38.494459 | orchestrator | Thursday 08 January 2026 00:49:11 +0000 (0:00:01.480) 0:00:03.863 ******
2026-01-08 00:49:38.494465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494510 | orchestrator |
2026-01-08 00:49:38.494514 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-01-08 00:49:38.494519 | orchestrator | Thursday 08 January 2026 00:49:14 +0000 (0:00:03.072) 0:00:06.935 ******
2026-01-08 00:49:38.494524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF':
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494580 | orchestrator |
2026-01-08 00:49:38.494586 | orchestrator | TASK [service-check-containers : redis | Check containers] *********************
2026-01-08 00:49:38.494591 | orchestrator | Thursday 08 January 2026 00:49:17 +0000 (0:00:02.986) 0:00:09.922 ******
2026-01-08 00:49:38.494596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494630 | orchestrator |
2026-01-08 00:49:38.494633 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] ***
2026-01-08 00:49:38.494636 | orchestrator | Thursday 08 January 2026 00:49:19 +0000 (0:00:01.738) 0:00:11.661 ******
2026-01-08 00:49:38.494640 | orchestrator | changed: [testbed-node-0] => {
2026-01-08 00:49:38.494643 | orchestrator |  "msg": "Notifying handlers"
2026-01-08 00:49:38.494646 | orchestrator | }
2026-01-08 00:49:38.494650 | orchestrator | changed: [testbed-node-1] => {
2026-01-08 00:49:38.494653 | orchestrator |  "msg": "Notifying handlers"
2026-01-08 00:49:38.494656 | orchestrator | }
2026-01-08 00:49:38.494660 | orchestrator | changed: [testbed-node-2] => {
2026-01-08 00:49:38.494663 | orchestrator |  "msg": "Notifying handlers"
2026-01-08 00:49:38.494666 | orchestrator | }
2026-01-08 00:49:38.494669 | orchestrator |
2026-01-08 00:49:38.494673 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-08 00:49:38.494676 | orchestrator | Thursday 08 January 2026 00:49:20 +0000 (0:00:00.380) 0:00:12.041 ******
2026-01-08 00:49:38.494679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494697 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:49:38.494700 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:49:38.494703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2025.1', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2025.1', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-08 00:49:38.494712 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:49:38.494715 | orchestrator |
2026-01-08 00:49:38.494719 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-01-08 00:49:38.494722 | orchestrator | Thursday 08 January 2026 00:49:21 +0000 (0:00:01.708) 0:00:13.750 ******
2026-01-08 00:49:38.494725 | orchestrator |
2026-01-08 00:49:38.494728 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-01-08 00:49:38.494731 | orchestrator | Thursday 08 January 2026 00:49:21 +0000 (0:00:00.079) 0:00:13.829 ******
2026-01-08 00:49:38.494734 | orchestrator |
2026-01-08 00:49:38.494738 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-01-08 00:49:38.494743 | orchestrator | Thursday 08 January 2026 00:49:21 +0000 (0:00:00.095) 0:00:13.924 ******
2026-01-08 00:49:38.494748 | orchestrator |
2026-01-08 00:49:38.494753 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-01-08 00:49:38.494758 | orchestrator | Thursday 08 January 2026 00:49:22 +0000 (0:00:00.081) 0:00:14.005 ******
2026-01-08 00:49:38.494763 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:49:38.494768 | orchestrator
| changed: [testbed-node-1] 2026-01-08 00:49:38.494773 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:49:38.494778 | orchestrator | 2026-01-08 00:49:38.494783 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-01-08 00:49:38.494789 | orchestrator | Thursday 08 January 2026 00:49:26 +0000 (0:00:04.237) 0:00:18.243 ****** 2026-01-08 00:49:38.494818 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:49:38.494825 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:49:38.494831 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:49:38.494836 | orchestrator | 2026-01-08 00:49:38.494842 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 00:49:38.494847 | orchestrator | testbed-node-0 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-08 00:49:38.494852 | orchestrator | testbed-node-1 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-08 00:49:38.494859 | orchestrator | testbed-node-2 : ok=10  changed=7  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-08 00:49:38.494862 | orchestrator | 2026-01-08 00:49:38.494866 | orchestrator | 2026-01-08 00:49:38.494870 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 00:49:38.494873 | orchestrator | Thursday 08 January 2026 00:49:35 +0000 (0:00:09.493) 0:00:27.737 ****** 2026-01-08 00:49:38.494877 | orchestrator | =============================================================================== 2026-01-08 00:49:38.494880 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.49s 2026-01-08 00:49:38.494884 | orchestrator | redis : Restart redis container ----------------------------------------- 4.24s 2026-01-08 00:49:38.494888 | orchestrator | redis : Copying over default config.json files -------------------------- 3.07s 2026-01-08 
00:49:38.494891 | orchestrator | redis : Copying over redis config files --------------------------------- 2.99s 2026-01-08 00:49:38.494895 | orchestrator | service-check-containers : redis | Check containers --------------------- 1.74s 2026-01-08 00:49:38.494898 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.71s 2026-01-08 00:49:38.494902 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.48s 2026-01-08 00:49:38.494906 | orchestrator | redis : include_tasks --------------------------------------------------- 0.72s 2026-01-08 00:49:38.494909 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.64s 2026-01-08 00:49:38.494913 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.55s 2026-01-08 00:49:38.494917 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.38s 2026-01-08 00:49:38.494920 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.26s 2026-01-08 00:49:38.495216 | orchestrator | 2026-01-08 00:49:38 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED 2026-01-08 00:49:38.496820 | orchestrator | 2026-01-08 00:49:38 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:49:38.498505 | orchestrator | 2026-01-08 00:49:38 | INFO  | Task 9cddd670-9df8-4107-b6e4-1521d8ef19e8 is in state STARTED 2026-01-08 00:49:38.499652 | orchestrator | 2026-01-08 00:49:38 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED 2026-01-08 00:49:38.500782 | orchestrator | 2026-01-08 00:49:38 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:49:38.501515 | orchestrator | 2026-01-08 00:49:38 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:49:41.541579 | orchestrator | 2026-01-08 00:49:41 | INFO  | Task 
1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED 2026-01-08 00:50:18.298485 | orchestrator | 2026-01-08 00:50:18 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:50:18.298514 | orchestrator | 2026-01-08 00:50:18 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:50:21.332145 | orchestrator | 2026-01-08 00:50:21 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED 2026-01-08 00:50:21.334163 | orchestrator | 2026-01-08 00:50:21 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED 2026-01-08 00:50:21.337173 | orchestrator | 2026-01-08 00:50:21 | INFO  | Task 9cddd670-9df8-4107-b6e4-1521d8ef19e8 is in state SUCCESS 2026-01-08 00:50:21.339342 | orchestrator | 2026-01-08 00:50:21.339578 | orchestrator | 2026-01-08 00:50:21.339606 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-08 00:50:21.339615 | orchestrator | 2026-01-08 00:50:21.339623 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-08 00:50:21.339650 | orchestrator | Thursday 08 January 2026 00:49:07 +0000 (0:00:00.334) 0:00:00.334 ****** 2026-01-08 00:50:21.339659 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:50:21.339668 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:50:21.339676 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:50:21.339684 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:50:21.339691 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:50:21.339699 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:50:21.339707 | orchestrator | 2026-01-08 00:50:21.339715 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-08 00:50:21.339723 | orchestrator | Thursday 08 January 2026 00:49:08 +0000 (0:00:01.028) 0:00:01.363 ****** 2026-01-08 00:50:21.339731 | orchestrator | ok: [testbed-node-0] => 
(item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-08 00:50:21.339740 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-08 00:50:21.339758 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-08 00:50:21.339766 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-08 00:50:21.339774 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-08 00:50:21.339782 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-08 00:50:21.339790 | orchestrator | 2026-01-08 00:50:21.339798 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-01-08 00:50:21.339806 | orchestrator | 2026-01-08 00:50:21.339814 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-01-08 00:50:21.339822 | orchestrator | Thursday 08 January 2026 00:49:09 +0000 (0:00:00.905) 0:00:02.268 ****** 2026-01-08 00:50:21.339831 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 00:50:21.339839 | orchestrator | 2026-01-08 00:50:21.339848 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-08 00:50:21.339856 | orchestrator | Thursday 08 January 2026 00:49:11 +0000 (0:00:01.851) 0:00:04.119 ****** 2026-01-08 00:50:21.339864 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-08 00:50:21.339872 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-08 00:50:21.339880 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-08 00:50:21.339888 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-08 00:50:21.339896 | 
orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-08 00:50:21.339904 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-08 00:50:21.339912 | orchestrator | 2026-01-08 00:50:21.339920 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-08 00:50:21.339928 | orchestrator | Thursday 08 January 2026 00:49:13 +0000 (0:00:02.120) 0:00:06.239 ****** 2026-01-08 00:50:21.339936 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-08 00:50:21.339944 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-08 00:50:21.339952 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-08 00:50:21.339960 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-08 00:50:21.339968 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-08 00:50:21.339976 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-08 00:50:21.339984 | orchestrator | 2026-01-08 00:50:21.339992 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-08 00:50:21.340000 | orchestrator | Thursday 08 January 2026 00:49:15 +0000 (0:00:02.166) 0:00:08.406 ****** 2026-01-08 00:50:21.340008 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-01-08 00:50:21.340018 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-01-08 00:50:21.340033 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:50:21.340046 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-01-08 00:50:21.340068 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:50:21.340082 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-01-08 00:50:21.340096 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:50:21.340111 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-01-08 00:50:21.340125 | 
orchestrator | skipping: [testbed-node-3] 2026-01-08 00:50:21.340138 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:50:21.340152 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-01-08 00:50:21.340166 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:50:21.340179 | orchestrator | 2026-01-08 00:50:21.340193 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-01-08 00:50:21.340207 | orchestrator | Thursday 08 January 2026 00:49:17 +0000 (0:00:01.766) 0:00:10.173 ****** 2026-01-08 00:50:21.340219 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:50:21.340228 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:50:21.340237 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:50:21.340246 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:50:21.340255 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:50:21.340265 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:50:21.340274 | orchestrator | 2026-01-08 00:50:21.340305 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-01-08 00:50:21.340314 | orchestrator | Thursday 08 January 2026 00:49:18 +0000 (0:00:00.912) 0:00:11.085 ****** 2026-01-08 00:50:21.340340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340391 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340414 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340428 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340530 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340544 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340568 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340584 | orchestrator | 2026-01-08 00:50:21.340598 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-01-08 00:50:21.340607 | orchestrator | Thursday 08 January 2026 00:49:20 +0000 (0:00:01.698) 0:00:12.784 ****** 2026-01-08 00:50:21.340620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340638 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340652 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340661 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340694 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340727 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340735 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-08 00:50:21.340757 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-08 00:50:21.340770 | orchestrator |
2026-01-08 00:50:21.340782 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-01-08 00:50:21.340796 | orchestrator | Thursday 08 January 2026 00:49:24 +0000 (0:00:04.120) 0:00:16.904 ******
2026-01-08 00:50:21.340810 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:50:21.340825 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:50:21.340839 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:50:21.340852 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:50:21.340866 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:50:21.340881 | orchestrator |
skipping: [testbed-node-5] 2026-01-08 00:50:21.340895 | orchestrator | 2026-01-08 00:50:21.340914 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-01-08 00:50:21.340922 | orchestrator | Thursday 08 January 2026 00:49:25 +0000 (0:00:01.606) 0:00:18.511 ****** 2026-01-08 00:50:21.340931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl 
version'], 'timeout': '30'}}}) 2026-01-08 00:50:21.340990 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-08 00:50:21.341004 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-08 00:50:21.341012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-08 00:50:21.341020 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-08 00:50:21.341029 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-08 00:50:21.341042 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-08 00:50:21.341060 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-08 00:50:21.341079 | orchestrator | 2026-01-08 00:50:21.341091 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-01-08 00:50:21.341102 | orchestrator | Thursday 08 January 2026 00:49:29 +0000 (0:00:03.997) 0:00:22.508 ****** 2026-01-08 00:50:21.341114 | orchestrator | changed: [testbed-node-0] => { 2026-01-08 00:50:21.341125 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:50:21.341137 | orchestrator | } 2026-01-08 00:50:21.341149 | orchestrator | changed: [testbed-node-1] => { 2026-01-08 00:50:21.341161 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:50:21.341173 | orchestrator | } 2026-01-08 00:50:21.341185 | orchestrator | changed: [testbed-node-2] => { 2026-01-08 00:50:21.341196 | orchestrator |  
"msg": "Notifying handlers"
2026-01-08 00:50:21.341208 | orchestrator | }
2026-01-08 00:50:21.341219 | orchestrator | changed: [testbed-node-3] => {
2026-01-08 00:50:21.341231 | orchestrator |  "msg": "Notifying handlers"
2026-01-08 00:50:21.341244 | orchestrator | }
2026-01-08 00:50:21.341255 | orchestrator | changed: [testbed-node-4] => {
2026-01-08 00:50:21.341268 | orchestrator |  "msg": "Notifying handlers"
2026-01-08 00:50:21.341312 | orchestrator | }
2026-01-08 00:50:21.341329 | orchestrator | changed: [testbed-node-5] => {
2026-01-08 00:50:21.341344 | orchestrator |  "msg": "Notifying handlers"
2026-01-08 00:50:21.341357 | orchestrator | }
2026-01-08 00:50:21.341366 | orchestrator |
2026-01-08 00:50:21.341373 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-08 00:50:21.341382 | orchestrator | Thursday 08 January 2026 00:49:31 +0000 (0:00:01.520) 0:00:24.028 ******
2026-01-08 00:50:21.341391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-08 00:50:21.341400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged':
True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-08 00:50:21.341408 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:50:21.341425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-08 00:50:21.341445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-08 00:50:21.341454 | 
orchestrator | skipping: [testbed-node-1] 2026-01-08 00:50:21.341463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-08 00:50:21.341471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-08 00:50:21.341480 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-08 00:50:21.341488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-08 00:50:21.341496 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:50:21.341505 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:50:21.341518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-08 00:50:21.341535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-08 00:50:21.341544 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:50:21.341552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-08 00:50:21.341561 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-08 00:50:21.341569 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:50:21.341577 | orchestrator |
2026-01-08 00:50:21.341585 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-08 00:50:21.341722 | orchestrator | Thursday 08 January 2026 00:49:32 +0000 (0:00:01.307) 0:00:25.336 ******
2026-01-08 00:50:21.341740 | orchestrator |
2026-01-08 00:50:21.341754 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-08 00:50:21.341768 | orchestrator | Thursday 08 January 2026 00:49:33 +0000 (0:00:00.224) 0:00:25.561 ******
2026-01-08 00:50:21.341782 | orchestrator |
2026-01-08 00:50:21.341796 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-08 00:50:21.341808 | orchestrator | Thursday 08 January 2026 00:49:33 +0000 (0:00:00.182) 0:00:25.743 ******
2026-01-08 00:50:21.341823 | orchestrator |
2026-01-08 00:50:21.341837 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-08 00:50:21.341851 | orchestrator | Thursday 08 January 2026 00:49:33 +0000 (0:00:00.156) 0:00:25.900 ******
2026-01-08 00:50:21.341865 | orchestrator |
2026-01-08 00:50:21.341879 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-08 00:50:21.341961 | orchestrator | Thursday 08 January 2026 00:49:33 +0000 (0:00:00.265) 0:00:26.166 ******
2026-01-08 00:50:21.341973 | orchestrator |
2026-01-08 00:50:21.341981 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-08 00:50:21.341989 | orchestrator | Thursday 08 January 2026 00:49:33 +0000 (0:00:00.207) 0:00:26.374 ******
2026-01-08 00:50:21.341996 | orchestrator |
2026-01-08 00:50:21.342004 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-01-08 00:50:21.342053 | orchestrator | Thursday 08 January 2026 00:49:33 +0000 (0:00:00.130) 0:00:26.504 ******
2026-01-08 00:50:21.342064 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:50:21.342073 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:50:21.342081 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:50:21.342088 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:50:21.342096 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:50:21.342104 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:50:21.342112 | orchestrator |
2026-01-08 00:50:21.342120 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-01-08 00:50:21.342137 | orchestrator | Thursday 08 January 2026 00:49:44 +0000 (0:00:10.219) 0:00:36.724 ******
2026-01-08 00:50:21.342146 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:50:21.342155 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:50:21.342162 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:50:21.342170 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:50:21.342178 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:50:21.342186 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:50:21.342194 | orchestrator |
2026-01-08 00:50:21.342202 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-01-08 00:50:21.342210 | orchestrator | Thursday 08 January 2026 00:49:47 +0000 (0:00:02.874) 0:00:39.599 ******
2026-01-08 00:50:21.342218 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:50:21.342226 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:50:21.342234 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:50:21.342242 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:50:21.342249 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:50:21.342257 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:50:21.342265 | orchestrator |
2026-01-08 00:50:21.342273 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-01-08 00:50:21.342374 | orchestrator | Thursday 08 January 2026 00:49:57 +0000 (0:00:10.010) 0:00:49.610 ******
2026-01-08 00:50:21.342391 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-01-08 00:50:21.342401 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-01-08 00:50:21.342411 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-01-08 00:50:21.342421 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-01-08 00:50:21.342430 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-01-08 00:50:21.342439 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-01-08 00:50:21.342449 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-01-08 00:50:21.342458 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-01-08 00:50:21.342467 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-01-08 00:50:21.342477 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-01-08 00:50:21.342486 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-01-08 00:50:21.342501 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-01-08 00:50:21.342511 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-08 00:50:21.342520 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-08 00:50:21.342529 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-08 00:50:21.342538 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-08 00:50:21.342565 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-08 00:50:21.342574 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-08 00:50:21.342583 | orchestrator |
2026-01-08 00:50:21.342592 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-01-08 00:50:21.342601 | orchestrator | Thursday 08 January 2026 00:50:04 +0000 (0:00:07.011) 0:00:56.621 ******
2026-01-08 00:50:21.342611 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-01-08 00:50:21.342621 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:50:21.342630 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-01-08 00:50:21.342639 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:50:21.342648 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-01-08 00:50:21.342658 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:50:21.342667 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-01-08 00:50:21.342677 | orchestrator
| changed: [testbed-node-1] => (item=br-ex) 2026-01-08 00:50:21.342686 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-01-08 00:50:21.342695 | orchestrator | 2026-01-08 00:50:21.342706 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-01-08 00:50:21.342715 | orchestrator | Thursday 08 January 2026 00:50:06 +0000 (0:00:02.466) 0:00:59.087 ****** 2026-01-08 00:50:21.342723 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-01-08 00:50:21.342731 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:50:21.342739 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-01-08 00:50:21.342747 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:50:21.342755 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-01-08 00:50:21.342763 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:50:21.342771 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-01-08 00:50:21.342785 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-01-08 00:50:21.342793 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-01-08 00:50:21.342801 | orchestrator | 2026-01-08 00:50:21.342809 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-08 00:50:21.342821 | orchestrator | Thursday 08 January 2026 00:50:10 +0000 (0:00:03.713) 0:01:02.801 ****** 2026-01-08 00:50:21.342835 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:50:21.342848 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:50:21.342861 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:50:21.342876 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:50:21.342890 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:50:21.342904 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:50:21.342917 | orchestrator | 2026-01-08 
00:50:21.342931 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 00:50:21.342938 | orchestrator | testbed-node-0 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-08 00:50:21.342954 | orchestrator | testbed-node-1 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-08 00:50:21.342962 | orchestrator | testbed-node-2 : ok=16  changed=12  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-08 00:50:21.342969 | orchestrator | testbed-node-3 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-08 00:50:21.342975 | orchestrator | testbed-node-4 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-08 00:50:21.342982 | orchestrator | testbed-node-5 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-08 00:50:21.342989 | orchestrator | 2026-01-08 00:50:21.342995 | orchestrator | 2026-01-08 00:50:21.343002 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 00:50:21.343009 | orchestrator | Thursday 08 January 2026 00:50:18 +0000 (0:00:08.121) 0:01:10.922 ****** 2026-01-08 00:50:21.343016 | orchestrator | =============================================================================== 2026-01-08 00:50:21.343023 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.13s 2026-01-08 00:50:21.343029 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.22s 2026-01-08 00:50:21.343036 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.01s 2026-01-08 00:50:21.343043 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.12s 2026-01-08 00:50:21.343052 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 4.00s 
2026-01-08 00:50:21.343066 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.71s
2026-01-08 00:50:21.343081 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.87s
2026-01-08 00:50:21.343091 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.47s
2026-01-08 00:50:21.343102 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.17s
2026-01-08 00:50:21.343112 | orchestrator | module-load : Load modules ---------------------------------------------- 2.12s
2026-01-08 00:50:21.343123 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.85s
2026-01-08 00:50:21.343132 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.77s
2026-01-08 00:50:21.343142 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.70s
2026-01-08 00:50:21.343153 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.61s
2026-01-08 00:50:21.343164 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 1.52s
2026-01-08 00:50:21.343175 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.31s
2026-01-08 00:50:21.343186 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.17s
2026-01-08 00:50:21.343197 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.03s
2026-01-08 00:50:21.343208 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.91s
2026-01-08 00:50:21.343216 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.91s
2026-01-08 00:50:21.343223 | orchestrator | 2026-01-08 00:50:21 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:50:21.343230 | orchestrator | 2026-01-08 00:50:21 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:50:21.343893 | orchestrator | 2026-01-08 00:50:21 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:50:21.343940 | orchestrator | 2026-01-08 00:50:21 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:50:24.379571 | orchestrator | 2026-01-08 00:50:24 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:50:24.383500 | orchestrator | 2026-01-08 00:50:24 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED
2026-01-08 00:50:24.384980 | orchestrator | 2026-01-08 00:50:24 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:50:24.385920 | orchestrator | 2026-01-08 00:50:24 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:50:24.387047 | orchestrator | 2026-01-08 00:50:24 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:50:24.387091 | orchestrator | 2026-01-08 00:50:24 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:50:27.430965 | orchestrator | 2026-01-08 00:50:27 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:50:27.433736 | orchestrator | 2026-01-08 00:50:27 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED
2026-01-08 00:50:27.434487 | orchestrator | 2026-01-08 00:50:27 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:50:27.436551 | orchestrator | 2026-01-08 00:50:27 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:50:27.436590 | orchestrator | 2026-01-08 00:50:27 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:50:27.436705 | orchestrator | 2026-01-08 00:50:27 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:50:30.483711 | orchestrator | 2026-01-08 00:50:30 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:50:30.486411 | orchestrator | 2026-01-08 00:50:30 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED
2026-01-08 00:50:30.486563 | orchestrator | 2026-01-08 00:50:30 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:50:30.488865 | orchestrator | 2026-01-08 00:50:30 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:50:30.489993 | orchestrator | 2026-01-08 00:50:30 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:50:30.490159 | orchestrator | 2026-01-08 00:50:30 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:50:33.530938 | orchestrator | 2026-01-08 00:50:33 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:50:33.534512 | orchestrator | 2026-01-08 00:50:33 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED
2026-01-08 00:50:33.538847 | orchestrator | 2026-01-08 00:50:33 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:50:33.541644 | orchestrator | 2026-01-08 00:50:33 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:50:33.542656 | orchestrator | 2026-01-08 00:50:33 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:50:33.542703 | orchestrator | 2026-01-08 00:50:33 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:50:36.590370 | orchestrator | 2026-01-08 00:50:36 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:50:36.592313 | orchestrator | 2026-01-08 00:50:36 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED
2026-01-08 00:50:36.594642 | orchestrator | 2026-01-08 00:50:36 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:50:36.597126 | orchestrator | 2026-01-08 00:50:36 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:50:36.601765 | orchestrator | 2026-01-08 00:50:36 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:50:36.601847 | orchestrator | 2026-01-08 00:50:36 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:50:39.646713 | orchestrator | 2026-01-08 00:50:39 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:50:39.649301 | orchestrator | 2026-01-08 00:50:39 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED
2026-01-08 00:50:39.651687 | orchestrator | 2026-01-08 00:50:39 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:50:39.653357 | orchestrator | 2026-01-08 00:50:39 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:50:39.655277 | orchestrator | 2026-01-08 00:50:39 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:50:39.655346 | orchestrator | 2026-01-08 00:50:39 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:50:42.707737 | orchestrator | 2026-01-08 00:50:42 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:50:42.707787 | orchestrator | 2026-01-08 00:50:42 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED
2026-01-08 00:50:42.708313 | orchestrator | 2026-01-08 00:50:42 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:50:42.709057 | orchestrator | 2026-01-08 00:50:42 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:50:42.709837 | orchestrator | 2026-01-08 00:50:42 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:50:42.710148 | orchestrator | 2026-01-08 00:50:42 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:50:45.758543 | orchestrator | 2026-01-08 00:50:45 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:50:45.758940 | orchestrator | 2026-01-08 00:50:45 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED
2026-01-08 00:50:45.759991 | orchestrator | 2026-01-08 00:50:45 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:50:45.760573 | orchestrator | 2026-01-08 00:50:45 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:50:45.761284 | orchestrator | 2026-01-08 00:50:45 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:50:45.761298 | orchestrator | 2026-01-08 00:50:45 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:50:48.844850 | orchestrator | 2026-01-08 00:50:48 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:50:48.845202 | orchestrator | 2026-01-08 00:50:48 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED
2026-01-08 00:50:48.845934 | orchestrator | 2026-01-08 00:50:48 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:50:48.846562 | orchestrator | 2026-01-08 00:50:48 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:50:48.847106 | orchestrator | 2026-01-08 00:50:48 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:50:48.847124 | orchestrator | 2026-01-08 00:50:48 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:50:51.884519 | orchestrator | 2026-01-08 00:50:51 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:50:51.890413 | orchestrator | 2026-01-08 00:50:51 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED
2026-01-08 00:50:51.894488 | orchestrator | 2026-01-08 00:50:51 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:50:51.894916 | orchestrator | 2026-01-08 00:50:51 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:50:51.896986 | orchestrator | 2026-01-08 00:50:51 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:50:51.897021 | orchestrator | 2026-01-08 00:50:51 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:50:55.019624 | orchestrator | 2026-01-08 00:50:55 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:50:55.020234 | orchestrator | 2026-01-08 00:50:55 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED
2026-01-08 00:50:55.021168 | orchestrator | 2026-01-08 00:50:55 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:50:55.021860 | orchestrator | 2026-01-08 00:50:55 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:50:55.023882 | orchestrator | 2026-01-08 00:50:55 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:50:55.023981 | orchestrator | 2026-01-08 00:50:55 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:50:58.140062 | orchestrator | 2026-01-08 00:50:58 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:50:58.141017 | orchestrator | 2026-01-08 00:50:58 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED
2026-01-08 00:50:58.141681 | orchestrator | 2026-01-08 00:50:58 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:50:58.142461 | orchestrator | 2026-01-08 00:50:58 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:50:58.143070 | orchestrator | 2026-01-08 00:50:58 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:50:58.143095 | orchestrator | 2026-01-08 00:50:58 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:51:01.169774 | orchestrator | 2026-01-08 00:51:01 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:51:01.170198 | orchestrator | 2026-01-08 00:51:01 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED
2026-01-08 00:51:01.170808 | orchestrator | 2026-01-08 00:51:01 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:51:01.171495 | orchestrator | 2026-01-08 00:51:01 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:51:01.172082 | orchestrator | 2026-01-08 00:51:01 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:51:01.172107 | orchestrator | 2026-01-08 00:51:01 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:51:04.255840 | orchestrator | 2026-01-08 00:51:04 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:51:04.256463 | orchestrator | 2026-01-08 00:51:04 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state STARTED
2026-01-08 00:51:04.256500 | orchestrator | 2026-01-08 00:51:04 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:51:04.256508 | orchestrator | 2026-01-08 00:51:04 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:51:04.256519 | orchestrator | 2026-01-08 00:51:04 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:51:04.256527 | orchestrator | 2026-01-08 00:51:04 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:51:07.282786 | orchestrator | 2026-01-08 00:51:07 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:51:07.283515 | orchestrator | 2026-01-08 00:51:07 | INFO  | Task 9f8bdc52-18f0-45ea-add5-6dff8af31351 is in state SUCCESS
2026-01-08 00:51:07.284857 | orchestrator |
2026-01-08 00:51:07.284923 | orchestrator |
2026-01-08 00:51:07.284934 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-01-08 00:51:07.284940 | orchestrator |
2026-01-08 00:51:07.284944 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-01-08 00:51:07.284950 | orchestrator | Thursday 08 January 2026 00:46:30 +0000 (0:00:00.202) 0:00:00.202 ******
2026-01-08 00:51:07.284954 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:51:07.284959 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:51:07.284963 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:51:07.284967 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:51:07.284971 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:51:07.284975 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:51:07.284979 | orchestrator |
2026-01-08 00:51:07.284983 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-01-08 00:51:07.284987 | orchestrator | Thursday 08 January 2026 00:46:31 +0000 (0:00:00.800) 0:00:01.002 ******
2026-01-08 00:51:07.284992 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:51:07.284999 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:51:07.285005 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:51:07.285011 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:51:07.285017 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:51:07.285022 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:51:07.285028 | orchestrator |
2026-01-08 00:51:07.285033 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-01-08 00:51:07.285039 | orchestrator | Thursday 08 January 2026 00:46:32 +0000 (0:00:00.655) 0:00:01.658 ******
2026-01-08 00:51:07.285045 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:51:07.285051 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:51:07.285057 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:51:07.285062 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:51:07.285069 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:51:07.285075 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:51:07.285081 | orchestrator |
2026-01-08 00:51:07.285087 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-01-08 00:51:07.285094 | orchestrator | Thursday 08 January 2026 00:46:33 +0000 (0:00:00.772) 0:00:02.430 ******
2026-01-08 00:51:07.285099 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:51:07.285103 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:51:07.285107 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:51:07.285111 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:51:07.285115 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:51:07.285119 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:51:07.285123 | orchestrator |
2026-01-08 00:51:07.285220 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-01-08 00:51:07.285251 | orchestrator | Thursday 08 January 2026 00:46:34 +0000 (0:00:01.816) 0:00:04.247 ******
2026-01-08 00:51:07.285255 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:51:07.285259 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:51:07.285263 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:51:07.285267 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:51:07.285358 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:51:07.285368 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:51:07.285374 | orchestrator |
2026-01-08 00:51:07.285380 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-01-08 00:51:07.285387 | orchestrator | Thursday 08 January 2026 00:46:36 +0000 (0:00:01.101) 0:00:05.348 ******
2026-01-08 00:51:07.285393 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:51:07.285400 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:51:07.285406 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:51:07.285413 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:51:07.285503 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:51:07.285514 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:51:07.285521 | orchestrator |
2026-01-08 00:51:07.285528 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-01-08 00:51:07.285535 | orchestrator | Thursday 08 January 2026 00:46:36 +0000 (0:00:00.810) 0:00:06.159 ******
2026-01-08 00:51:07.285542 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:51:07.285548 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:51:07.285555 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:51:07.285561 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:51:07.285568 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:51:07.285574 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:51:07.285580 | orchestrator |
2026-01-08 00:51:07.285587 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-01-08 00:51:07.285593 | orchestrator | Thursday 08 January 2026 00:46:37 +0000 (0:00:00.662) 0:00:06.821 ******
2026-01-08 00:51:07.285600 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:51:07.285608 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:51:07.285615 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:51:07.285621 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:51:07.285627 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:51:07.285633 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:51:07.285640 | orchestrator |
2026-01-08 00:51:07.285646 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-01-08 00:51:07.285653 | orchestrator | Thursday 08 January 2026 00:46:38 +0000 (0:00:00.954) 0:00:07.775 ******
2026-01-08 00:51:07.285659 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-08 00:51:07.285667 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-08 00:51:07.285674 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:51:07.285680 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-08 00:51:07.285701 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-08 00:51:07.285710 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:51:07.285716 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-08 00:51:07.285723 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-08 00:51:07.285729 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:51:07.285736 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-08 00:51:07.285754 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-08 00:51:07.285760 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:51:07.285768 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-08 00:51:07.285774 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-08 00:51:07.285779 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:51:07.285785 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-08 00:51:07.285792 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-08 00:51:07.285799 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:51:07.285807 | orchestrator |
2026-01-08 00:51:07.285813 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-01-08 00:51:07.285821 | orchestrator | Thursday 08 January 2026 00:46:39 +0000 (0:00:00.782) 0:00:08.558 ******
2026-01-08 00:51:07.285829 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:51:07.285836 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:51:07.285842 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:51:07.285849 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:51:07.285857 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:51:07.285864 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:51:07.285878 | orchestrator |
2026-01-08 00:51:07.285886 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-01-08 00:51:07.285894 | orchestrator | Thursday 08 January 2026 00:46:40 +0000 (0:00:01.170) 0:00:09.729 ******
2026-01-08 00:51:07.285901 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:51:07.285908 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:51:07.285914 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:51:07.285921 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:51:07.285927 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:51:07.285934 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:51:07.285940 | orchestrator |
2026-01-08 00:51:07.285948 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-01-08 00:51:07.285955 | orchestrator | Thursday 08 January 2026 00:46:41 +0000 (0:00:00.995) 0:00:10.724 ******
2026-01-08 00:51:07.285962 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:51:07.285968 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:51:07.285975 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:51:07.285981 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:51:07.285988 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:51:07.285994 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:51:07.286000 | orchestrator |
2026-01-08 00:51:07.286008 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-01-08 00:51:07.286013 | orchestrator | Thursday 08 January 2026 00:46:47 +0000 (0:00:06.524) 0:00:17.249 ******
2026-01-08 00:51:07.286054 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:51:07.286058 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:51:07.286062 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:51:07.286066 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:51:07.286070 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:51:07.286074 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:51:07.286078 | orchestrator |
2026-01-08 00:51:07.286082 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-01-08 00:51:07.286087 | orchestrator | Thursday 08 January 2026 00:46:49 +0000 (0:00:01.180) 0:00:18.430 ******
2026-01-08 00:51:07.286090 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:51:07.286095 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:51:07.286098 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:51:07.286102 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:51:07.286106 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:51:07.286110 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:51:07.286114 | orchestrator |
2026-01-08 00:51:07.286118 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-01-08 00:51:07.286124 | orchestrator | Thursday 08 January 2026 00:46:50 +0000 (0:00:01.663) 0:00:20.093 ******
2026-01-08 00:51:07.286142 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:51:07.286148 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:51:07.286152 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:51:07.286213 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:51:07.286238 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:51:07.286244 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:51:07.286249 | orchestrator |
2026-01-08 00:51:07.286254 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-01-08 00:51:07.286259 | orchestrator | Thursday 08 January 2026 00:46:51 +0000 (0:00:00.761) 0:00:20.855 ******
2026-01-08 00:51:07.286264 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-01-08 00:51:07.286270 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-01-08 00:51:07.286275 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:51:07.286280 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-01-08 00:51:07.286284 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-01-08 00:51:07.286289 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:51:07.286294 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-01-08 00:51:07.286305 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-01-08 00:51:07.286310 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-01-08 00:51:07.286315 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-01-08 00:51:07.286320 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:51:07.286325 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-01-08 00:51:07.286335 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-01-08 00:51:07.286341 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:51:07.286345 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:51:07.286350 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-01-08 00:51:07.286355 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-01-08 00:51:07.286359 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:51:07.286364 | orchestrator |
2026-01-08 00:51:07.286368 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-01-08 00:51:07.286380 | orchestrator | Thursday 08 January 2026 00:46:52 +0000 (0:00:01.288) 0:00:22.143 ******
2026-01-08 00:51:07.286385 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:51:07.286389 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:51:07.286393 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:51:07.286398 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:51:07.286402 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:51:07.286406 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:51:07.286410 | orchestrator |
2026-01-08 00:51:07.286414 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-01-08 00:51:07.286418 | orchestrator | Thursday 08 January 2026 00:46:54 +0000 (0:00:01.115) 0:00:23.258 ******
2026-01-08 00:51:07.286422 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:51:07.286426 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:51:07.286430 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:51:07.286434 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:51:07.286438 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:51:07.286442 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:51:07.286446 | orchestrator |
2026-01-08 00:51:07.286450 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-01-08 00:51:07.286454 | orchestrator |
2026-01-08 00:51:07.286458 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-01-08 00:51:07.286464 | orchestrator | Thursday 08 January 2026 00:46:55 +0000 (0:00:01.800) 0:00:25.058 ******
2026-01-08 00:51:07.286471 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:51:07.286477 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:51:07.286483 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:51:07.286489 | orchestrator |
2026-01-08 00:51:07.286496 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-01-08 00:51:07.286504 | orchestrator | Thursday 08 January 2026 00:46:57 +0000 (0:00:01.767) 0:00:26.826 ******
2026-01-08 00:51:07.286513 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:51:07.286519 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:51:07.286525 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:51:07.286531 | orchestrator |
2026-01-08 00:51:07.286538 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-01-08 00:51:07.286545 | orchestrator | Thursday 08 January 2026 00:46:58 +0000 (0:00:01.343) 0:00:28.169 ******
2026-01-08 00:51:07.286551 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:51:07.286557 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:51:07.286564 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:51:07.286571 | orchestrator |
2026-01-08 00:51:07.286575 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-01-08 00:51:07.286579 | orchestrator | Thursday 08 January 2026 00:46:59 +0000 (0:00:00.957) 0:00:29.126 ******
2026-01-08 00:51:07.286583 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:51:07.286586 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:51:07.286590 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:51:07.286600 | orchestrator |
2026-01-08 00:51:07.286604 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-01-08 00:51:07.286608 | orchestrator | Thursday 08 January 2026 00:47:00 +0000 (0:00:00.596) 0:00:29.723 ******
2026-01-08 00:51:07.286612 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:51:07.286616 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:51:07.286620 | orchestrator | skipping: [testbed-node-2]
2026-01-08
00:51:07.286624 | orchestrator | 2026-01-08 00:51:07.286627 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-01-08 00:51:07.286631 | orchestrator | Thursday 08 January 2026 00:47:00 +0000 (0:00:00.371) 0:00:30.094 ****** 2026-01-08 00:51:07.286635 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:51:07.286639 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:51:07.286643 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:51:07.286647 | orchestrator | 2026-01-08 00:51:07.286651 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-01-08 00:51:07.286655 | orchestrator | Thursday 08 January 2026 00:47:02 +0000 (0:00:01.223) 0:00:31.318 ****** 2026-01-08 00:51:07.286659 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:51:07.286663 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:51:07.286667 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:51:07.286671 | orchestrator | 2026-01-08 00:51:07.286675 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-01-08 00:51:07.286678 | orchestrator | Thursday 08 January 2026 00:47:03 +0000 (0:00:01.235) 0:00:32.553 ****** 2026-01-08 00:51:07.286682 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:51:07.286686 | orchestrator | 2026-01-08 00:51:07.286690 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-01-08 00:51:07.286694 | orchestrator | Thursday 08 January 2026 00:47:03 +0000 (0:00:00.552) 0:00:33.106 ****** 2026-01-08 00:51:07.286698 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:51:07.286702 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:51:07.286706 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:51:07.286710 | orchestrator | 2026-01-08 00:51:07.286714 | orchestrator | TASK [k3s_server : 
Create manifests directory on first master] ***************** 2026-01-08 00:51:07.286718 | orchestrator | Thursday 08 January 2026 00:47:06 +0000 (0:00:02.777) 0:00:35.883 ****** 2026-01-08 00:51:07.286722 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:51:07.286776 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:51:07.286782 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:51:07.286786 | orchestrator | 2026-01-08 00:51:07.286791 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-01-08 00:51:07.286795 | orchestrator | Thursday 08 January 2026 00:47:07 +0000 (0:00:00.571) 0:00:36.455 ****** 2026-01-08 00:51:07.286798 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:51:07.286807 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:51:07.286842 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:51:07.286847 | orchestrator | 2026-01-08 00:51:07.286851 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-01-08 00:51:07.286855 | orchestrator | Thursday 08 January 2026 00:47:08 +0000 (0:00:00.879) 0:00:37.334 ****** 2026-01-08 00:51:07.286859 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:51:07.286863 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:51:07.286867 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:51:07.286871 | orchestrator | 2026-01-08 00:51:07.286875 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-01-08 00:51:07.286884 | orchestrator | Thursday 08 January 2026 00:47:09 +0000 (0:00:01.395) 0:00:38.730 ****** 2026-01-08 00:51:07.286888 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:51:07.286892 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:51:07.286896 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:51:07.286900 | orchestrator | 2026-01-08 00:51:07.286904 | orchestrator | TASK [k3s_server : Deploy 
kube-vip manifest] *********************************** 2026-01-08 00:51:07.286913 | orchestrator | Thursday 08 January 2026 00:47:10 +0000 (0:00:00.587) 0:00:39.317 ****** 2026-01-08 00:51:07.286923 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:51:07.286927 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:51:07.286931 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:51:07.286935 | orchestrator | 2026-01-08 00:51:07.286939 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-01-08 00:51:07.286943 | orchestrator | Thursday 08 January 2026 00:47:10 +0000 (0:00:00.335) 0:00:39.653 ****** 2026-01-08 00:51:07.286947 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:51:07.286951 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:51:07.286955 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:51:07.286959 | orchestrator | 2026-01-08 00:51:07.286962 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-01-08 00:51:07.286967 | orchestrator | Thursday 08 January 2026 00:47:11 +0000 (0:00:01.350) 0:00:41.003 ****** 2026-01-08 00:51:07.286970 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:51:07.286974 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:51:07.286978 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:51:07.286983 | orchestrator | 2026-01-08 00:51:07.286987 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-01-08 00:51:07.286991 | orchestrator | Thursday 08 January 2026 00:47:13 +0000 (0:00:02.154) 0:00:43.158 ****** 2026-01-08 00:51:07.286995 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:51:07.286998 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:51:07.287002 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:51:07.287006 | orchestrator | 2026-01-08 00:51:07.287010 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check 
k3s-init.service if this fails)] *** 2026-01-08 00:51:07.287014 | orchestrator | Thursday 08 January 2026 00:47:14 +0000 (0:00:00.707) 0:00:43.865 ****** 2026-01-08 00:51:07.287018 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-08 00:51:07.287023 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-08 00:51:07.287027 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-01-08 00:51:07.287031 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-08 00:51:07.287035 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-08 00:51:07.287041 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-01-08 00:51:07.287048 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-01-08 00:51:07.287055 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-01-08 00:51:07.287061 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-01-08 00:51:07.287067 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2026-01-08 00:51:07.287073 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-01-08 00:51:07.287079 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-01-08 00:51:07.287086 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:51:07.287097 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:51:07.287103 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:51:07.287110 | orchestrator | 2026-01-08 00:51:07.287115 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-01-08 00:51:07.287121 | orchestrator | Thursday 08 January 2026 00:47:57 +0000 (0:00:43.322) 0:01:27.188 ****** 2026-01-08 00:51:07.287127 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:51:07.287133 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:51:07.287139 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:51:07.287146 | orchestrator | 2026-01-08 00:51:07.287152 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-01-08 00:51:07.287168 | orchestrator | Thursday 08 January 2026 00:47:58 +0000 (0:00:00.382) 0:01:27.570 ****** 2026-01-08 00:51:07.287175 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:51:07.287181 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:51:07.287187 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:51:07.287191 | orchestrator | 2026-01-08 00:51:07.287195 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-01-08 00:51:07.287199 | orchestrator | Thursday 08 January 2026 00:47:59 +0000 (0:00:01.491) 0:01:29.061 ****** 2026-01-08 00:51:07.287203 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:51:07.287207 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:51:07.287211 | 
orchestrator | changed: [testbed-node-2] 2026-01-08 00:51:07.287215 | orchestrator | 2026-01-08 00:51:07.287244 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-01-08 00:51:07.287249 | orchestrator | Thursday 08 January 2026 00:48:01 +0000 (0:00:01.267) 0:01:30.329 ****** 2026-01-08 00:51:07.287253 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:51:07.287257 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:51:07.287261 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:51:07.287265 | orchestrator | 2026-01-08 00:51:07.287270 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-01-08 00:51:07.287274 | orchestrator | Thursday 08 January 2026 00:48:42 +0000 (0:00:41.712) 0:02:12.042 ****** 2026-01-08 00:51:07.287279 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:51:07.287283 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:51:07.287287 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:51:07.287291 | orchestrator | 2026-01-08 00:51:07.287296 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-01-08 00:51:07.287300 | orchestrator | Thursday 08 January 2026 00:48:43 +0000 (0:00:00.593) 0:02:12.635 ****** 2026-01-08 00:51:07.287304 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:51:07.287308 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:51:07.287311 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:51:07.287315 | orchestrator | 2026-01-08 00:51:07.287319 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-01-08 00:51:07.287323 | orchestrator | Thursday 08 January 2026 00:48:43 +0000 (0:00:00.545) 0:02:13.180 ****** 2026-01-08 00:51:07.287327 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:51:07.287331 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:51:07.287335 | orchestrator | changed: [testbed-node-2] 
2026-01-08 00:51:07.287339 | orchestrator | 2026-01-08 00:51:07.287343 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-01-08 00:51:07.287347 | orchestrator | Thursday 08 January 2026 00:48:44 +0000 (0:00:00.579) 0:02:13.760 ****** 2026-01-08 00:51:07.287351 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:51:07.287355 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:51:07.287359 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:51:07.287363 | orchestrator | 2026-01-08 00:51:07.287367 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-01-08 00:51:07.287371 | orchestrator | Thursday 08 January 2026 00:48:45 +0000 (0:00:00.860) 0:02:14.621 ****** 2026-01-08 00:51:07.287375 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:51:07.287379 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:51:07.287383 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:51:07.287391 | orchestrator | 2026-01-08 00:51:07.287396 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-01-08 00:51:07.287400 | orchestrator | Thursday 08 January 2026 00:48:45 +0000 (0:00:00.326) 0:02:14.947 ****** 2026-01-08 00:51:07.287404 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:51:07.287408 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:51:07.287412 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:51:07.287416 | orchestrator | 2026-01-08 00:51:07.287420 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-01-08 00:51:07.287424 | orchestrator | Thursday 08 January 2026 00:48:46 +0000 (0:00:00.606) 0:02:15.554 ****** 2026-01-08 00:51:07.287428 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:51:07.287431 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:51:07.287435 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:51:07.287439 | orchestrator | 
2026-01-08 00:51:07.287443 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-01-08 00:51:07.287447 | orchestrator | Thursday 08 January 2026 00:48:46 +0000 (0:00:00.595) 0:02:16.150 ****** 2026-01-08 00:51:07.287451 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:51:07.287455 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:51:07.287459 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:51:07.287463 | orchestrator | 2026-01-08 00:51:07.287467 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-01-08 00:51:07.287471 | orchestrator | Thursday 08 January 2026 00:48:47 +0000 (0:00:01.091) 0:02:17.241 ****** 2026-01-08 00:51:07.287475 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:51:07.287479 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:51:07.287483 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:51:07.287487 | orchestrator | 2026-01-08 00:51:07.287492 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-01-08 00:51:07.287497 | orchestrator | Thursday 08 January 2026 00:48:48 +0000 (0:00:00.966) 0:02:18.208 ****** 2026-01-08 00:51:07.287501 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:51:07.287506 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:51:07.287510 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:51:07.287515 | orchestrator | 2026-01-08 00:51:07.287519 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-01-08 00:51:07.287524 | orchestrator | Thursday 08 January 2026 00:48:49 +0000 (0:00:00.319) 0:02:18.528 ****** 2026-01-08 00:51:07.287529 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:51:07.287534 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:51:07.287538 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:51:07.287543 | orchestrator | 
2026-01-08 00:51:07.287547 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-01-08 00:51:07.287552 | orchestrator | Thursday 08 January 2026 00:48:49 +0000 (0:00:00.287) 0:02:18.815 ****** 2026-01-08 00:51:07.287557 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:51:07.287561 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:51:07.287565 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:51:07.287570 | orchestrator | 2026-01-08 00:51:07.287575 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-01-08 00:51:07.287580 | orchestrator | Thursday 08 January 2026 00:48:50 +0000 (0:00:00.931) 0:02:19.747 ****** 2026-01-08 00:51:07.287585 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:51:07.287592 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:51:07.287598 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:51:07.287602 | orchestrator | 2026-01-08 00:51:07.287607 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-01-08 00:51:07.287612 | orchestrator | Thursday 08 January 2026 00:48:51 +0000 (0:00:00.678) 0:02:20.425 ****** 2026-01-08 00:51:07.287616 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-08 00:51:07.287624 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-08 00:51:07.287636 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-01-08 00:51:07.287643 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-08 00:51:07.287652 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-08 00:51:07.287662 | orchestrator | changed: 
[testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-01-08 00:51:07.287668 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-08 00:51:07.287676 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-08 00:51:07.287683 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-01-08 00:51:07.287690 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-01-08 00:51:07.287697 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-08 00:51:07.287705 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-08 00:51:07.287712 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-01-08 00:51:07.287719 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-08 00:51:07.287727 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-08 00:51:07.287734 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-01-08 00:51:07.287741 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-08 00:51:07.287749 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-08 00:51:07.287756 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-01-08 00:51:07.287760 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-01-08 00:51:07.287765 | orchestrator | 2026-01-08 00:51:07.287770 | orchestrator | 
PLAY [Deploy k3s worker nodes] ************************************************* 2026-01-08 00:51:07.287774 | orchestrator | 2026-01-08 00:51:07.287779 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-01-08 00:51:07.287784 | orchestrator | Thursday 08 January 2026 00:48:54 +0000 (0:00:03.561) 0:02:23.987 ****** 2026-01-08 00:51:07.287788 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:51:07.287793 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:51:07.287798 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:51:07.287802 | orchestrator | 2026-01-08 00:51:07.287807 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-01-08 00:51:07.287812 | orchestrator | Thursday 08 January 2026 00:48:55 +0000 (0:00:00.515) 0:02:24.502 ****** 2026-01-08 00:51:07.287816 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:51:07.287821 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:51:07.287825 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:51:07.287830 | orchestrator | 2026-01-08 00:51:07.287835 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-01-08 00:51:07.287839 | orchestrator | Thursday 08 January 2026 00:48:55 +0000 (0:00:00.612) 0:02:25.115 ****** 2026-01-08 00:51:07.287844 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:51:07.287849 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:51:07.287853 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:51:07.287858 | orchestrator | 2026-01-08 00:51:07.287862 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-01-08 00:51:07.287866 | orchestrator | Thursday 08 January 2026 00:48:56 +0000 (0:00:00.325) 0:02:25.441 ****** 2026-01-08 00:51:07.287870 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 00:51:07.287881 | 
orchestrator | 2026-01-08 00:51:07.287885 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-01-08 00:51:07.287889 | orchestrator | Thursday 08 January 2026 00:48:56 +0000 (0:00:00.672) 0:02:26.113 ****** 2026-01-08 00:51:07.287893 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:51:07.287897 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:51:07.287901 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:51:07.287904 | orchestrator | 2026-01-08 00:51:07.287908 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-01-08 00:51:07.287912 | orchestrator | Thursday 08 January 2026 00:48:57 +0000 (0:00:00.312) 0:02:26.425 ****** 2026-01-08 00:51:07.287916 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:51:07.287920 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:51:07.287924 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:51:07.287928 | orchestrator | 2026-01-08 00:51:07.287932 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-01-08 00:51:07.287939 | orchestrator | Thursday 08 January 2026 00:48:57 +0000 (0:00:00.300) 0:02:26.726 ****** 2026-01-08 00:51:07.287943 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:51:07.287947 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:51:07.287951 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:51:07.287955 | orchestrator | 2026-01-08 00:51:07.287959 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-01-08 00:51:07.287963 | orchestrator | Thursday 08 January 2026 00:48:57 +0000 (0:00:00.283) 0:02:27.010 ****** 2026-01-08 00:51:07.287967 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:51:07.287971 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:51:07.287975 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:51:07.287979 | 
orchestrator | 2026-01-08 00:51:07.287988 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-01-08 00:51:07.287992 | orchestrator | Thursday 08 January 2026 00:48:58 +0000 (0:00:00.860) 0:02:27.870 ****** 2026-01-08 00:51:07.287996 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:51:07.287999 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:51:07.288003 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:51:07.288007 | orchestrator | 2026-01-08 00:51:07.288011 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-01-08 00:51:07.288015 | orchestrator | Thursday 08 January 2026 00:48:59 +0000 (0:00:01.222) 0:02:29.093 ****** 2026-01-08 00:51:07.288019 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:51:07.288023 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:51:07.288027 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:51:07.288030 | orchestrator | 2026-01-08 00:51:07.288034 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-01-08 00:51:07.288038 | orchestrator | Thursday 08 January 2026 00:49:01 +0000 (0:00:01.387) 0:02:30.481 ****** 2026-01-08 00:51:07.288042 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:51:07.288046 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:51:07.288050 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:51:07.288054 | orchestrator | 2026-01-08 00:51:07.288058 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-01-08 00:51:07.288062 | orchestrator | 2026-01-08 00:51:07.288066 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-01-08 00:51:07.288070 | orchestrator | Thursday 08 January 2026 00:49:11 +0000 (0:00:10.007) 0:02:40.488 ****** 2026-01-08 00:51:07.288074 | orchestrator | ok: [testbed-manager] 2026-01-08 
00:51:07.288078 | orchestrator |
2026-01-08 00:51:07.288082 | orchestrator | TASK [Create .kube directory] **************************************************
2026-01-08 00:51:07.288086 | orchestrator | Thursday 08 January 2026 00:49:12 +0000 (0:00:00.922) 0:02:41.410 ******
2026-01-08 00:51:07.288090 | orchestrator | changed: [testbed-manager]
2026-01-08 00:51:07.288093 | orchestrator |
2026-01-08 00:51:07.288097 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-08 00:51:07.288107 | orchestrator | Thursday 08 January 2026 00:49:12 +0000 (0:00:00.393) 0:02:41.804 ******
2026-01-08 00:51:07.288111 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-08 00:51:07.288115 | orchestrator |
2026-01-08 00:51:07.288119 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-08 00:51:07.288123 | orchestrator | Thursday 08 January 2026 00:49:13 +0000 (0:00:00.499) 0:02:42.303 ******
2026-01-08 00:51:07.288127 | orchestrator | changed: [testbed-manager]
2026-01-08 00:51:07.288131 | orchestrator |
2026-01-08 00:51:07.288135 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-01-08 00:51:07.288138 | orchestrator | Thursday 08 January 2026 00:49:13 +0000 (0:00:00.768) 0:02:43.071 ******
2026-01-08 00:51:07.288142 | orchestrator | changed: [testbed-manager]
2026-01-08 00:51:07.288146 | orchestrator |
2026-01-08 00:51:07.288150 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-01-08 00:51:07.288154 | orchestrator | Thursday 08 January 2026 00:49:14 +0000 (0:00:00.559) 0:02:43.631 ******
2026-01-08 00:51:07.288158 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-08 00:51:07.288161 | orchestrator |
2026-01-08 00:51:07.288166 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-01-08 00:51:07.288170 | orchestrator | Thursday 08 January 2026 00:49:15 +0000 (0:00:01.603) 0:02:45.235 ******
2026-01-08 00:51:07.288174 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-08 00:51:07.288178 | orchestrator |
2026-01-08 00:51:07.288181 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-01-08 00:51:07.288185 | orchestrator | Thursday 08 January 2026 00:49:16 +0000 (0:00:00.412) 0:02:46.112 ******
2026-01-08 00:51:07.288189 | orchestrator | changed: [testbed-manager]
2026-01-08 00:51:07.288193 | orchestrator |
2026-01-08 00:51:07.288197 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-01-08 00:51:07.288202 | orchestrator | Thursday 08 January 2026 00:49:17 +0000 (0:00:00.412) 0:02:46.525 ******
2026-01-08 00:51:07.288206 | orchestrator | changed: [testbed-manager]
2026-01-08 00:51:07.288209 | orchestrator |
2026-01-08 00:51:07.288213 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-01-08 00:51:07.288217 | orchestrator |
2026-01-08 00:51:07.288354 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-01-08 00:51:07.288361 | orchestrator | Thursday 08 January 2026 00:49:17 +0000 (0:00:00.622) 0:02:47.147 ******
2026-01-08 00:51:07.288365 | orchestrator | ok: [testbed-manager]
2026-01-08 00:51:07.288369 | orchestrator |
2026-01-08 00:51:07.288373 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-01-08 00:51:07.288377 | orchestrator | Thursday 08 January 2026 00:49:18 +0000 (0:00:00.111) 0:02:47.258 ******
2026-01-08 00:51:07.288381 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-01-08 00:51:07.288385 | orchestrator |
2026-01-08 00:51:07.288389 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-01-08 00:51:07.288393 | orchestrator | Thursday 08 January 2026 00:49:18 +0000 (0:00:00.217) 0:02:47.476 ******
2026-01-08 00:51:07.288397 | orchestrator | ok: [testbed-manager]
2026-01-08 00:51:07.288401 | orchestrator |
2026-01-08 00:51:07.288405 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-01-08 00:51:07.288409 | orchestrator | Thursday 08 January 2026 00:49:19 +0000 (0:00:00.912) 0:02:48.388 ******
2026-01-08 00:51:07.288416 | orchestrator | ok: [testbed-manager]
2026-01-08 00:51:07.288421 | orchestrator |
2026-01-08 00:51:07.288425 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-01-08 00:51:07.288429 | orchestrator | Thursday 08 January 2026 00:49:20 +0000 (0:00:01.599) 0:02:49.988 ******
2026-01-08 00:51:07.288432 | orchestrator | changed: [testbed-manager]
2026-01-08 00:51:07.288436 | orchestrator |
2026-01-08 00:51:07.288440 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-01-08 00:51:07.288445 | orchestrator | Thursday 08 January 2026 00:49:21 +0000 (0:00:00.925) 0:02:50.913 ******
2026-01-08 00:51:07.288455 | orchestrator | ok: [testbed-manager]
2026-01-08 00:51:07.288459 | orchestrator |
2026-01-08 00:51:07.288468 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-01-08 00:51:07.288472 | orchestrator | Thursday 08 January 2026 00:49:22 +0000 (0:00:00.625) 0:02:51.539 ******
2026-01-08 00:51:07.288476 | orchestrator | changed: [testbed-manager]
2026-01-08 00:51:07.288480 | orchestrator |
2026-01-08 00:51:07.288484 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-01-08 00:51:07.288488 | orchestrator | Thursday 08 January 2026 00:49:30 +0000 (0:00:08.033) 0:02:59.572 ******
2026-01-08 00:51:07.288492 | orchestrator | changed: [testbed-manager]
2026-01-08 00:51:07.288496 | orchestrator |
2026-01-08 00:51:07.288500 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-01-08 00:51:07.288504 | orchestrator | Thursday 08 January 2026 00:49:43 +0000 (0:00:13.357) 0:03:12.930 ******
2026-01-08 00:51:07.288508 | orchestrator | ok: [testbed-manager]
2026-01-08 00:51:07.288512 | orchestrator |
2026-01-08 00:51:07.288516 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-01-08 00:51:07.288520 | orchestrator |
2026-01-08 00:51:07.288524 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-01-08 00:51:07.288528 | orchestrator | Thursday 08 January 2026 00:49:44 +0000 (0:00:00.567) 0:03:13.498 ******
2026-01-08 00:51:07.288532 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:51:07.288536 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:51:07.288540 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:51:07.288543 | orchestrator |
2026-01-08 00:51:07.288547 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-01-08 00:51:07.288551 | orchestrator | Thursday 08 January 2026 00:49:44 +0000 (0:00:00.406) 0:03:13.904 ******
2026-01-08 00:51:07.288555 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:51:07.288559 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:51:07.288563 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:51:07.288567 | orchestrator |
2026-01-08 00:51:07.288570 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-01-08 00:51:07.288574 | orchestrator | Thursday 08 January 2026 00:49:45 +0000 (0:00:00.409) 0:03:14.314 ******
2026-01-08 00:51:07.288578 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:51:07.288582 | orchestrator |
2026-01-08 00:51:07.288586 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-01-08 00:51:07.288590 | orchestrator | Thursday 08 January 2026 00:49:46 +0000 (0:00:01.100) 0:03:15.414 ******
2026-01-08 00:51:07.288594 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-08 00:51:07.288598 | orchestrator |
2026-01-08 00:51:07.288602 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-01-08 00:51:07.288606 | orchestrator | Thursday 08 January 2026 00:49:47 +0000 (0:00:01.047) 0:03:16.462 ******
2026-01-08 00:51:07.288610 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-08 00:51:07.288614 | orchestrator |
2026-01-08 00:51:07.288618 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-01-08 00:51:07.288622 | orchestrator | Thursday 08 January 2026 00:49:48 +0000 (0:00:00.812) 0:03:17.274 ******
2026-01-08 00:51:07.288625 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:51:07.288629 | orchestrator |
2026-01-08 00:51:07.288633 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-01-08 00:51:07.288637 | orchestrator | Thursday 08 January 2026 00:49:48 +0000 (0:00:00.098) 0:03:17.373 ******
2026-01-08 00:51:07.288641 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-08 00:51:07.288645 | orchestrator |
2026-01-08 00:51:07.288649 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-01-08 00:51:07.288653 | orchestrator | Thursday 08 January 2026 00:49:49 +0000 (0:00:00.962) 0:03:18.335 ******
2026-01-08 00:51:07.288657 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:51:07.288666 | orchestrator |
2026-01-08 00:51:07.288670 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-01-08 00:51:07.288674 | orchestrator | Thursday 08 January 2026 00:49:49 +0000 (0:00:00.224) 0:03:18.560 ******
2026-01-08 00:51:07.288677 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:51:07.288681 | orchestrator |
2026-01-08 00:51:07.288685 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-01-08 00:51:07.288689 | orchestrator | Thursday 08 January 2026 00:49:49 +0000 (0:00:00.141) 0:03:18.701 ******
2026-01-08 00:51:07.288693 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:51:07.288697 | orchestrator |
2026-01-08 00:51:07.288701 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-01-08 00:51:07.288705 | orchestrator | Thursday 08 January 2026 00:49:49 +0000 (0:00:00.186) 0:03:18.888 ******
2026-01-08 00:51:07.288709 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:51:07.288713 | orchestrator |
2026-01-08 00:51:07.288717 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-01-08 00:51:07.288720 | orchestrator | Thursday 08 January 2026 00:49:49 +0000 (0:00:00.135) 0:03:19.023 ******
2026-01-08 00:51:07.288725 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-08 00:51:07.288728 | orchestrator |
2026-01-08 00:51:07.288732 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-01-08 00:51:07.288736 | orchestrator | Thursday 08 January 2026 00:49:55 +0000 (0:00:05.697) 0:03:24.721 ******
2026-01-08 00:51:07.288741 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-01-08 00:51:07.288747 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-01-08 00:51:07.288752 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-01-08 00:51:07.288756 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-01-08 00:51:07.288760 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-01-08 00:51:07.288764 | orchestrator |
2026-01-08 00:51:07.288768 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-01-08 00:51:07.288771 | orchestrator | Thursday 08 January 2026 00:50:38 +0000 (0:00:42.647) 0:04:07.369 ******
2026-01-08 00:51:07.288779 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-08 00:51:07.288783 | orchestrator |
2026-01-08 00:51:07.288787 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-01-08 00:51:07.288791 | orchestrator | Thursday 08 January 2026 00:50:39 +0000 (0:00:01.181) 0:04:08.550 ******
2026-01-08 00:51:07.288795 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-08 00:51:07.288799 | orchestrator |
2026-01-08 00:51:07.288803 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-01-08 00:51:07.288807 | orchestrator | Thursday 08 January 2026 00:50:40 +0000 (0:00:01.695) 0:04:10.246 ******
2026-01-08 00:51:07.288811 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-08 00:51:07.288815 | orchestrator |
2026-01-08 00:51:07.288819 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-01-08 00:51:07.288823 | orchestrator | Thursday 08 January 2026 00:50:42 +0000 (0:00:01.311) 0:04:11.558 ******
2026-01-08 00:51:07.288827 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:51:07.288831 | orchestrator |
2026-01-08 00:51:07.288834 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-01-08 00:51:07.288838 | orchestrator | Thursday 08 January 2026 00:50:42 +0000 (0:00:00.163) 0:04:11.721 ******
2026-01-08 00:51:07.288842 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-01-08 00:51:07.288846 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-01-08 00:51:07.288850 | orchestrator |
2026-01-08 00:51:07.288854 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-01-08 00:51:07.288862 | orchestrator | Thursday 08 January 2026 00:50:44 +0000 (0:00:02.027) 0:04:13.748 ******
2026-01-08 00:51:07.288866 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:51:07.288870 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:51:07.288874 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:51:07.288877 | orchestrator |
2026-01-08 00:51:07.288881 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-01-08 00:51:07.288886 | orchestrator | Thursday 08 January 2026 00:50:44 +0000 (0:00:00.397) 0:04:14.146 ******
2026-01-08 00:51:07.288889 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:51:07.288893 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:51:07.288897 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:51:07.288901 | orchestrator |
2026-01-08 00:51:07.288905 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-01-08 00:51:07.288909 | orchestrator |
2026-01-08 00:51:07.288913 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-01-08 00:51:07.288916 | orchestrator | Thursday 08 January 2026 00:50:46 +0000 (0:00:01.161) 0:04:15.308 ******
2026-01-08 00:51:07.288921 | orchestrator | ok: [testbed-manager]
2026-01-08 00:51:07.288925 | orchestrator |
2026-01-08 00:51:07.288928 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-01-08 00:51:07.288932 | orchestrator | Thursday 08 January 2026 00:50:46 +0000 (0:00:00.126) 0:04:15.434 ******
2026-01-08 00:51:07.288936 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-01-08 00:51:07.288940 | orchestrator |
2026-01-08 00:51:07.288944 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-01-08 00:51:07.288951 | orchestrator | Thursday 08 January 2026 00:50:46 +0000 (0:00:00.204) 0:04:15.639 ******
2026-01-08 00:51:07.288957 | orchestrator | changed: [testbed-manager]
2026-01-08 00:51:07.288963 | orchestrator |
2026-01-08 00:51:07.288969 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-01-08 00:51:07.288975 | orchestrator |
2026-01-08 00:51:07.288982 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-01-08 00:51:07.288987 | orchestrator | Thursday 08 January 2026 00:50:51 +0000 (0:00:05.280) 0:04:20.920 ******
2026-01-08 00:51:07.288993 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:51:07.289000 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:51:07.289005 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:51:07.289011 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:51:07.289017 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:51:07.289023 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:51:07.289029 | orchestrator |
2026-01-08 00:51:07.289036 | orchestrator | TASK [Manage labels] ***********************************************************
2026-01-08 00:51:07.289043 | orchestrator | Thursday 08 January 2026 00:50:52 +0000 (0:00:00.985) 0:04:21.906 ******
2026-01-08 00:51:07.289049 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-08 00:51:07.289055 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-08 00:51:07.289062 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-08 00:51:07.289066 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-08 00:51:07.289070 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-08 00:51:07.289074 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-08 00:51:07.289078 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-08 00:51:07.289082 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-08 00:51:07.289090 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-08 00:51:07.289094 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-08 00:51:07.289102 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-08 00:51:07.289106 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-01-08 00:51:07.289115 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-08 00:51:07.289119 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-08 00:51:07.289122 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-08 00:51:07.289126 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-08 00:51:07.289130 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-01-08 00:51:07.289134 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-01-08 00:51:07.289138 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-08 00:51:07.289141 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-08 00:51:07.289145 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-01-08 00:51:07.289149 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-08 00:51:07.289153 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-08 00:51:07.289157 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-01-08 00:51:07.289161 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-08 00:51:07.289165 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-08 00:51:07.289169 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-01-08 00:51:07.289172 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-08 00:51:07.289176 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-08 00:51:07.289180 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-01-08 00:51:07.289184 | orchestrator |
2026-01-08 00:51:07.289188 | orchestrator | TASK [Manage annotations] ******************************************************
2026-01-08 00:51:07.289192 | orchestrator | Thursday 08 January 2026 00:51:03 +0000 (0:00:11.018) 0:04:32.924 ******
2026-01-08 00:51:07.289196 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:51:07.289200 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:51:07.289204 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:51:07.289208 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:51:07.289212 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:51:07.289215 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:51:07.289234 | orchestrator |
2026-01-08 00:51:07.289239 | orchestrator | TASK [Manage taints] ***********************************************************
2026-01-08 00:51:07.289243 | orchestrator | Thursday 08 January 2026 00:51:04 +0000 (0:00:00.754) 0:04:33.679 ******
2026-01-08 00:51:07.289247 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:51:07.289251 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:51:07.289255 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:51:07.289258 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:51:07.289262 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:51:07.289266 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:51:07.289270 | orchestrator |
2026-01-08 00:51:07.289274 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 00:51:07.289278 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:51:07.289284 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-01-08 00:51:07.289293 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-08 00:51:07.289298 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-08 00:51:07.289302 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-08 00:51:07.289306 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-08 00:51:07.289309 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-08 00:51:07.289313 | orchestrator |
2026-01-08 00:51:07.289317 | orchestrator |
2026-01-08 00:51:07.289321 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 00:51:07.289330 | orchestrator | Thursday 08 January 2026 00:51:04 +0000 (0:00:00.406) 0:04:34.086 ******
2026-01-08 00:51:07.289334 | orchestrator | ===============================================================================
2026-01-08 00:51:07.289338 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.32s
2026-01-08 00:51:07.289342 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.65s
2026-01-08 00:51:07.289346 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 41.71s
2026-01-08 00:51:07.289356 | orchestrator | kubectl : Install required packages ------------------------------------ 13.36s
2026-01-08 00:51:07.289362 | orchestrator | Manage labels ---------------------------------------------------------- 11.02s
2026-01-08 00:51:07.289369 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.01s
2026-01-08 00:51:07.289376 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.03s
2026-01-08 00:51:07.289383 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.52s
2026-01-08 00:51:07.289389 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.70s
2026-01-08 00:51:07.289396 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.28s
2026-01-08 00:51:07.289404 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.56s
2026-01-08 00:51:07.289410 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.78s
2026-01-08 00:51:07.289418 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.15s
2026-01-08 00:51:07.289425 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.03s
2026-01-08 00:51:07.289432 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.82s
2026-01-08 00:51:07.289439 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 1.80s
2026-01-08 00:51:07.289445 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.77s
2026-01-08 00:51:07.289449 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.70s
2026-01-08 00:51:07.289453 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.66s
2026-01-08 00:51:07.289457 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.60s
2026-01-08 00:51:07.289461 | orchestrator | 2026-01-08 00:51:07 | INFO  | Task 8832393d-ba49-4662-bfe4-42c3ee209856 is in state STARTED
2026-01-08 00:51:07.289465 | orchestrator | 2026-01-08 00:51:07 | INFO  | Task 85717afb-f8b3-4235-8329-c032a59d83d9 is in state STARTED
2026-01-08 00:51:07.289469 | orchestrator | 2026-01-08 00:51:07 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:51:07.289478 | orchestrator | 2026-01-08 00:51:07 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:51:07.289482 | orchestrator | 2026-01-08 00:51:07 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:51:07.289486 | orchestrator | 2026-01-08 00:51:07 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:51:10.339675 | orchestrator | 2026-01-08 00:51:10 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:51:10.342775 | orchestrator | 2026-01-08 00:51:10 | INFO  | Task 8832393d-ba49-4662-bfe4-42c3ee209856 is in state STARTED
2026-01-08 00:51:10.344530 | orchestrator | 2026-01-08 00:51:10 | INFO  | Task 85717afb-f8b3-4235-8329-c032a59d83d9 is in state STARTED
2026-01-08 00:51:10.346275 | orchestrator | 2026-01-08 00:51:10 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:51:10.348432 | orchestrator | 2026-01-08 00:51:10 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:51:10.348983 | orchestrator | 2026-01-08 00:51:10 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:51:10.349039 | orchestrator | 2026-01-08 00:51:10 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:51:13.379872 | orchestrator | 2026-01-08 00:51:13 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:51:13.380853 | orchestrator | 2026-01-08 00:51:13 | INFO  | Task 8832393d-ba49-4662-bfe4-42c3ee209856 is in state STARTED
2026-01-08 00:51:13.381345 | orchestrator | 2026-01-08 00:51:13 | INFO  | Task 85717afb-f8b3-4235-8329-c032a59d83d9 is in state SUCCESS
2026-01-08 00:51:13.382327 | orchestrator | 2026-01-08 00:51:13 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:51:13.383413 | orchestrator | 2026-01-08 00:51:13 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:51:13.384307 | orchestrator | 2026-01-08 00:51:13 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:51:13.384362 | orchestrator | 2026-01-08 00:51:13 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:51:16.475594 | orchestrator | 2026-01-08 00:51:16 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:51:16.475671 | orchestrator | 2026-01-08 00:51:16 | INFO  | Task 8832393d-ba49-4662-bfe4-42c3ee209856 is in state STARTED
2026-01-08 00:51:16.475680 | orchestrator | 2026-01-08 00:51:16 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:51:16.475687 | orchestrator | 2026-01-08 00:51:16 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:51:16.475693 | orchestrator | 2026-01-08 00:51:16 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:51:16.475701 | orchestrator | 2026-01-08 00:51:16 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:51:19.497271 | orchestrator | 2026-01-08 00:51:19 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:51:19.497449 | orchestrator | 2026-01-08 00:51:19 | INFO  | Task 8832393d-ba49-4662-bfe4-42c3ee209856 is in state SUCCESS
2026-01-08 00:51:19.498329 | orchestrator | 2026-01-08 00:51:19 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:51:19.499045 | orchestrator | 2026-01-08 00:51:19 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:51:19.499847 | orchestrator | 2026-01-08 00:51:19 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:51:19.499960 | orchestrator | 2026-01-08 00:51:19 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:51:22.522105 | orchestrator | 2026-01-08 00:51:22 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:51:22.523348 | orchestrator | 2026-01-08 00:51:22 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:51:22.524967 | orchestrator | 2026-01-08 00:51:22 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:51:22.525859 | orchestrator | 2026-01-08 00:51:22 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:51:22.526065 | orchestrator | 2026-01-08 00:51:22 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:51:25.556680 | orchestrator | 2026-01-08 00:51:25 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:51:25.557548 | orchestrator | 2026-01-08 00:51:25 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:51:25.558624 | orchestrator | 2026-01-08 00:51:25 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:51:25.559460 | orchestrator | 2026-01-08 00:51:25 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:51:25.559496 | orchestrator | 2026-01-08 00:51:25 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:51:28.603255 | orchestrator | 2026-01-08 00:51:28 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:51:28.608138 | orchestrator | 2026-01-08 00:51:28 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:51:28.610679 | orchestrator | 2026-01-08 00:51:28 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:51:28.614888 | orchestrator | 2026-01-08 00:51:28 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:51:28.614973 | orchestrator | 2026-01-08 00:51:28 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:51:31.646200 | orchestrator | 2026-01-08 00:51:31 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:51:31.649156 | orchestrator | 2026-01-08 00:51:31 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:51:31.654286 | orchestrator | 2026-01-08 00:51:31 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:51:31.664311 | orchestrator | 2026-01-08 00:51:31 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:51:31.664361 | orchestrator | 2026-01-08 00:51:31 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:51:34.712592 | orchestrator | 2026-01-08 00:51:34 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:51:34.714558 | orchestrator | 2026-01-08 00:51:34 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:51:34.717043 | orchestrator | 2026-01-08 00:51:34 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:51:34.719963 | orchestrator | 2026-01-08 00:51:34 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:51:34.720023 | orchestrator | 2026-01-08 00:51:34 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:51:37.763939 | orchestrator | 2026-01-08 00:51:37 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:51:37.766825 | orchestrator | 2026-01-08 00:51:37 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:51:37.769031 | orchestrator | 2026-01-08 00:51:37 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:51:37.771023 | orchestrator | 2026-01-08 00:51:37 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:51:37.771402 | orchestrator | 2026-01-08 00:51:37 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:51:40.813713 | orchestrator | 2026-01-08 00:51:40 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:51:40.814593 | orchestrator | 2026-01-08 00:51:40 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:51:40.818889 | orchestrator | 2026-01-08 00:51:40 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:51:40.820251 | orchestrator | 2026-01-08 00:51:40 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:51:40.820302 | orchestrator | 2026-01-08 00:51:40 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:51:43.866578 | orchestrator | 2026-01-08 00:51:43 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:51:43.870716 | orchestrator | 2026-01-08 00:51:43 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:51:43.871970 | orchestrator | 2026-01-08 00:51:43 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:51:43.875663 | orchestrator | 2026-01-08 00:51:43 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:51:43.875722 | orchestrator | 2026-01-08 00:51:43 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:51:46.910222 | orchestrator | 2026-01-08 00:51:46 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:51:46.911950 | orchestrator | 2026-01-08 00:51:46 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:51:46.914201 | orchestrator | 2026-01-08 00:51:46 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:51:46.915299 | orchestrator | 2026-01-08 00:51:46 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:51:46.915649 | orchestrator | 2026-01-08 00:51:46 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:51:49.958397 | orchestrator | 2026-01-08 00:51:49 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:51:49.959311 | orchestrator | 2026-01-08 00:51:49 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:51:49.960478 | orchestrator | 2026-01-08 00:51:49 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:51:49.961863 | orchestrator | 2026-01-08 00:51:49 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:51:49.962113 | orchestrator | 2026-01-08 00:51:49 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:51:53.002753 | orchestrator | 2026-01-08 00:51:53 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:51:53.003170 | orchestrator | 2026-01-08 00:51:53 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:51:53.003977 | orchestrator | 2026-01-08 00:51:53 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:51:53.005874 | orchestrator | 2026-01-08 00:51:53 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:51:53.005914 | orchestrator | 2026-01-08 00:51:53 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:51:56.030137 | orchestrator | 2026-01-08 00:51:56 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:51:56.030899 | orchestrator | 2026-01-08 00:51:56 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:51:56.031779 | orchestrator | 2026-01-08 00:51:56 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:51:56.032915 | orchestrator | 2026-01-08 00:51:56 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:51:56.033143 | orchestrator | 2026-01-08 00:51:56 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:51:59.071554 | orchestrator | 2026-01-08 00:51:59 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:51:59.073069 | orchestrator | 2026-01-08 00:51:59 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:51:59.074471 | orchestrator | 2026-01-08 00:51:59 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:51:59.075773 | orchestrator | 2026-01-08 00:51:59 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:51:59.075826 | orchestrator | 2026-01-08 00:51:59 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:52:02.108776 | orchestrator | 2026-01-08 00:52:02 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:52:02.109760 | orchestrator | 2026-01-08 00:52:02 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:52:02.110880 | orchestrator | 2026-01-08 00:52:02 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:52:02.114414 | orchestrator | 2026-01-08 00:52:02 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:52:02.114521 | orchestrator | 2026-01-08 00:52:02 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:52:05.150225 | orchestrator | 2026-01-08 00:52:05 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:52:05.151297 | orchestrator | 2026-01-08 00:52:05 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:52:05.151625 | orchestrator | 2026-01-08 00:52:05 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:52:05.152450 | orchestrator | 2026-01-08 00:52:05 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:52:05.152477 | orchestrator | 2026-01-08 00:52:05 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:52:08.185543 | orchestrator | 2026-01-08 00:52:08 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:52:08.187308 | orchestrator | 2026-01-08 00:52:08 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:52:08.188211 | orchestrator | 2026-01-08 00:52:08 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED
2026-01-08 00:52:08.190187 | orchestrator | 2026-01-08 00:52:08 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:52:08.190251 | orchestrator | 2026-01-08 00:52:08 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:52:11.220019 | orchestrator | 2026-01-08 00:52:11 | INFO  | Task
f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED 2026-01-08 00:52:11.220368 | orchestrator | 2026-01-08 00:52:11 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED 2026-01-08 00:52:11.222061 | orchestrator | 2026-01-08 00:52:11 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED 2026-01-08 00:52:11.224800 | orchestrator | 2026-01-08 00:52:11 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:52:11.224929 | orchestrator | 2026-01-08 00:52:11 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:52:14.258891 | orchestrator | 2026-01-08 00:52:14 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED 2026-01-08 00:52:14.260020 | orchestrator | 2026-01-08 00:52:14 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED 2026-01-08 00:52:14.260683 | orchestrator | 2026-01-08 00:52:14 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED 2026-01-08 00:52:14.261207 | orchestrator | 2026-01-08 00:52:14 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:52:14.261271 | orchestrator | 2026-01-08 00:52:14 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:52:17.290258 | orchestrator | 2026-01-08 00:52:17 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED 2026-01-08 00:52:17.290585 | orchestrator | 2026-01-08 00:52:17 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED 2026-01-08 00:52:17.291423 | orchestrator | 2026-01-08 00:52:17 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED 2026-01-08 00:52:17.292171 | orchestrator | 2026-01-08 00:52:17 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:52:17.292201 | orchestrator | 2026-01-08 00:52:17 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:52:20.318811 | orchestrator | 2026-01-08 00:52:20 | INFO  | Task 
f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED 2026-01-08 00:52:20.320266 | orchestrator | 2026-01-08 00:52:20 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED 2026-01-08 00:52:20.321330 | orchestrator | 2026-01-08 00:52:20 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED 2026-01-08 00:52:20.326328 | orchestrator | 2026-01-08 00:52:20 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:52:20.326376 | orchestrator | 2026-01-08 00:52:20 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:52:23.354314 | orchestrator | 2026-01-08 00:52:23 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED 2026-01-08 00:52:23.354374 | orchestrator | 2026-01-08 00:52:23 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED 2026-01-08 00:52:23.356757 | orchestrator | 2026-01-08 00:52:23 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED 2026-01-08 00:52:23.358977 | orchestrator | 2026-01-08 00:52:23 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:52:23.359066 | orchestrator | 2026-01-08 00:52:23 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:52:26.401721 | orchestrator | 2026-01-08 00:52:26 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED 2026-01-08 00:52:26.403331 | orchestrator | 2026-01-08 00:52:26 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED 2026-01-08 00:52:26.404835 | orchestrator | 2026-01-08 00:52:26 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED 2026-01-08 00:52:26.406579 | orchestrator | 2026-01-08 00:52:26 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:52:26.406621 | orchestrator | 2026-01-08 00:52:26 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:52:29.449277 | orchestrator | 2026-01-08 00:52:29 | INFO  | Task 
f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED 2026-01-08 00:52:29.451535 | orchestrator | 2026-01-08 00:52:29 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED 2026-01-08 00:52:29.451871 | orchestrator | 2026-01-08 00:52:29 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED 2026-01-08 00:52:29.452878 | orchestrator | 2026-01-08 00:52:29 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:52:29.452910 | orchestrator | 2026-01-08 00:52:29 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:52:32.486279 | orchestrator | 2026-01-08 00:52:32 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED 2026-01-08 00:52:32.486704 | orchestrator | 2026-01-08 00:52:32 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED 2026-01-08 00:52:32.489141 | orchestrator | 2026-01-08 00:52:32 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED 2026-01-08 00:52:32.490410 | orchestrator | 2026-01-08 00:52:32 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:52:32.490469 | orchestrator | 2026-01-08 00:52:32 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:52:35.522283 | orchestrator | 2026-01-08 00:52:35 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED 2026-01-08 00:52:35.523168 | orchestrator | 2026-01-08 00:52:35 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED 2026-01-08 00:52:35.524659 | orchestrator | 2026-01-08 00:52:35 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state STARTED 2026-01-08 00:52:35.525451 | orchestrator | 2026-01-08 00:52:35 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:52:35.525571 | orchestrator | 2026-01-08 00:52:35 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:52:38.556827 | orchestrator | 2026-01-08 00:52:38 | INFO  | Task 
f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:52:38.558705 | orchestrator | 2026-01-08 00:52:38 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:52:38.561268 | orchestrator | 2026-01-08 00:52:38 | INFO  | Task 1c7dba5c-f1ca-4be3-9d53-3b842275b883 is in state SUCCESS
2026-01-08 00:52:38.562329 | orchestrator |
2026-01-08 00:52:38.562356 | orchestrator |
2026-01-08 00:52:38.562361 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-01-08 00:52:38.562367 | orchestrator |
2026-01-08 00:52:38.562372 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-08 00:52:38.562386 | orchestrator | Thursday 08 January 2026 00:51:09 +0000 (0:00:00.160) 0:00:00.160 ******
2026-01-08 00:52:38.562392 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-08 00:52:38.562397 | orchestrator |
2026-01-08 00:52:38.562402 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-08 00:52:38.562407 | orchestrator | Thursday 08 January 2026 00:51:10 +0000 (0:00:00.817) 0:00:00.978 ******
2026-01-08 00:52:38.562411 | orchestrator | changed: [testbed-manager]
2026-01-08 00:52:38.562416 | orchestrator |
2026-01-08 00:52:38.562421 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-01-08 00:52:38.562426 | orchestrator | Thursday 08 January 2026 00:51:11 +0000 (0:00:01.252) 0:00:02.231 ******
2026-01-08 00:52:38.562430 | orchestrator | changed: [testbed-manager]
2026-01-08 00:52:38.562435 | orchestrator |
2026-01-08 00:52:38.562440 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 00:52:38.562445 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:52:38.562450 | orchestrator |
2026-01-08 00:52:38.562455 | orchestrator |
2026-01-08 00:52:38.562460 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 00:52:38.562464 | orchestrator | Thursday 08 January 2026 00:51:12 +0000 (0:00:00.481) 0:00:02.712 ******
2026-01-08 00:52:38.562469 | orchestrator | ===============================================================================
2026-01-08 00:52:38.562486 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.25s
2026-01-08 00:52:38.562491 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.82s
2026-01-08 00:52:38.562495 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.48s
2026-01-08 00:52:38.562500 | orchestrator |
2026-01-08 00:52:38.562504 | orchestrator |
2026-01-08 00:52:38.562509 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-01-08 00:52:38.562513 | orchestrator |
2026-01-08 00:52:38.562518 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-01-08 00:52:38.562523 | orchestrator | Thursday 08 January 2026 00:51:09 +0000 (0:00:00.159) 0:00:00.159 ******
2026-01-08 00:52:38.562527 | orchestrator | ok: [testbed-manager]
2026-01-08 00:52:38.562532 | orchestrator |
2026-01-08 00:52:38.562537 | orchestrator | TASK [Create .kube directory] **************************************************
2026-01-08 00:52:38.562542 | orchestrator | Thursday 08 January 2026 00:51:10 +0000 (0:00:00.578) 0:00:00.738 ******
2026-01-08 00:52:38.562546 | orchestrator | ok: [testbed-manager]
2026-01-08 00:52:38.562551 | orchestrator |
2026-01-08 00:52:38.562556 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-08 00:52:38.562561 | orchestrator | Thursday 08 January 2026 00:51:10 +0000 (0:00:00.613) 0:00:01.351 ******
2026-01-08 00:52:38.562565 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-08 00:52:38.562570 | orchestrator |
2026-01-08 00:52:38.562574 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-08 00:52:38.562579 | orchestrator | Thursday 08 January 2026 00:51:11 +0000 (0:00:00.797) 0:00:02.149 ******
2026-01-08 00:52:38.562584 | orchestrator | changed: [testbed-manager]
2026-01-08 00:52:38.562588 | orchestrator |
2026-01-08 00:52:38.562593 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-01-08 00:52:38.562597 | orchestrator | Thursday 08 January 2026 00:51:13 +0000 (0:00:01.533) 0:00:03.682 ******
2026-01-08 00:52:38.562602 | orchestrator | changed: [testbed-manager]
2026-01-08 00:52:38.562606 | orchestrator |
2026-01-08 00:52:38.562611 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-01-08 00:52:38.562616 | orchestrator | Thursday 08 January 2026 00:51:13 +0000 (0:00:00.586) 0:00:04.268 ******
2026-01-08 00:52:38.562620 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-08 00:52:38.562625 | orchestrator |
2026-01-08 00:52:38.562630 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-01-08 00:52:38.562634 | orchestrator | Thursday 08 January 2026 00:51:15 +0000 (0:00:01.734) 0:00:06.003 ******
2026-01-08 00:52:38.562639 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-08 00:52:38.562643 | orchestrator |
2026-01-08 00:52:38.562648 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-01-08 00:52:38.562652 | orchestrator | Thursday 08 January 2026 00:51:16 +0000 (0:00:01.031) 0:00:07.034 ******
2026-01-08 00:52:38.562657 | orchestrator | ok: [testbed-manager]
2026-01-08 00:52:38.562661 | orchestrator |
2026-01-08 00:52:38.562666 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-01-08 00:52:38.562671 | orchestrator | Thursday 08 January 2026 00:51:16 +0000 (0:00:00.447) 0:00:07.482 ******
2026-01-08 00:52:38.562675 | orchestrator | ok: [testbed-manager]
2026-01-08 00:52:38.562680 | orchestrator |
2026-01-08 00:52:38.562684 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 00:52:38.562689 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 00:52:38.562693 | orchestrator |
2026-01-08 00:52:38.562698 | orchestrator |
2026-01-08 00:52:38.562703 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 00:52:38.562707 | orchestrator | Thursday 08 January 2026 00:51:17 +0000 (0:00:00.381) 0:00:07.863 ******
2026-01-08 00:52:38.562712 | orchestrator | ===============================================================================
2026-01-08 00:52:38.562720 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.73s
2026-01-08 00:52:38.562724 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.53s
2026-01-08 00:52:38.562729 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.03s
2026-01-08 00:52:38.562741 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.80s
2026-01-08 00:52:38.562746 | orchestrator | Create .kube directory -------------------------------------------------- 0.61s
2026-01-08 00:52:38.562753 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.59s
2026-01-08 00:52:38.562758 | orchestrator | Get home directory of operator user ------------------------------------- 0.58s
2026-01-08 00:52:38.562762 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.45s
2026-01-08 00:52:38.562767 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.38s
2026-01-08 00:52:38.562772 | orchestrator |
2026-01-08 00:52:38.564419 | orchestrator |
2026-01-08 00:52:38.564459 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-01-08 00:52:38.564468 | orchestrator |
2026-01-08 00:52:38.564476 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-01-08 00:52:38.564480 | orchestrator | Thursday 08 January 2026 00:49:32 +0000 (0:00:00.159) 0:00:00.159 ******
2026-01-08 00:52:38.564484 | orchestrator | ok: [localhost] => {
2026-01-08 00:52:38.564489 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-01-08 00:52:38.564493 | orchestrator | }
2026-01-08 00:52:38.564498 | orchestrator |
2026-01-08 00:52:38.564505 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-01-08 00:52:38.564511 | orchestrator | Thursday 08 January 2026 00:49:33 +0000 (0:00:00.057) 0:00:00.218 ******
2026-01-08 00:52:38.564517 | orchestrator | fatal: [localhost]: FAILED!
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-01-08 00:52:38.564524 | orchestrator | ...ignoring 2026-01-08 00:52:38.564529 | orchestrator | 2026-01-08 00:52:38.564535 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-01-08 00:52:38.564540 | orchestrator | Thursday 08 January 2026 00:49:35 +0000 (0:00:02.878) 0:00:03.096 ****** 2026-01-08 00:52:38.564546 | orchestrator | skipping: [localhost] 2026-01-08 00:52:38.564551 | orchestrator | 2026-01-08 00:52:38.564557 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-01-08 00:52:38.564563 | orchestrator | Thursday 08 January 2026 00:49:36 +0000 (0:00:00.187) 0:00:03.283 ****** 2026-01-08 00:52:38.564568 | orchestrator | ok: [localhost] 2026-01-08 00:52:38.564573 | orchestrator | 2026-01-08 00:52:38.564578 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-08 00:52:38.564583 | orchestrator | 2026-01-08 00:52:38.564589 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-08 00:52:38.564595 | orchestrator | Thursday 08 January 2026 00:49:36 +0000 (0:00:00.173) 0:00:03.457 ****** 2026-01-08 00:52:38.564600 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:52:38.564606 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:52:38.564611 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:52:38.564616 | orchestrator | 2026-01-08 00:52:38.564622 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-08 00:52:38.564627 | orchestrator | Thursday 08 January 2026 00:49:36 +0000 (0:00:00.608) 0:00:04.066 ****** 2026-01-08 00:52:38.564633 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-01-08 00:52:38.564639 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2026-01-08 00:52:38.564645 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-01-08 00:52:38.564651 | orchestrator | 2026-01-08 00:52:38.564657 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-01-08 00:52:38.564663 | orchestrator | 2026-01-08 00:52:38.564682 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-08 00:52:38.564689 | orchestrator | Thursday 08 January 2026 00:49:37 +0000 (0:00:00.864) 0:00:04.931 ****** 2026-01-08 00:52:38.564695 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:52:38.564699 | orchestrator | 2026-01-08 00:52:38.564703 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-08 00:52:38.564707 | orchestrator | Thursday 08 January 2026 00:49:38 +0000 (0:00:00.520) 0:00:05.451 ****** 2026-01-08 00:52:38.564710 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:52:38.564714 | orchestrator | 2026-01-08 00:52:38.564718 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-01-08 00:52:38.564722 | orchestrator | Thursday 08 January 2026 00:49:39 +0000 (0:00:00.891) 0:00:06.342 ****** 2026-01-08 00:52:38.564726 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:52:38.564730 | orchestrator | 2026-01-08 00:52:38.564734 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-01-08 00:52:38.564737 | orchestrator | Thursday 08 January 2026 00:49:39 +0000 (0:00:00.288) 0:00:06.630 ****** 2026-01-08 00:52:38.564741 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:52:38.564745 | orchestrator | 2026-01-08 00:52:38.564749 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-01-08 00:52:38.564753 | 
orchestrator | Thursday 08 January 2026 00:49:39 +0000 (0:00:00.303) 0:00:06.934 ****** 2026-01-08 00:52:38.564756 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:52:38.564760 | orchestrator | 2026-01-08 00:52:38.564764 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-01-08 00:52:38.564768 | orchestrator | Thursday 08 January 2026 00:49:40 +0000 (0:00:00.325) 0:00:07.259 ****** 2026-01-08 00:52:38.564771 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:52:38.564775 | orchestrator | 2026-01-08 00:52:38.564779 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-08 00:52:38.564783 | orchestrator | Thursday 08 January 2026 00:49:41 +0000 (0:00:01.079) 0:00:08.338 ****** 2026-01-08 00:52:38.564787 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:52:38.564790 | orchestrator | 2026-01-08 00:52:38.564794 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-08 00:52:38.564798 | orchestrator | Thursday 08 January 2026 00:49:42 +0000 (0:00:00.890) 0:00:09.229 ****** 2026-01-08 00:52:38.564802 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:52:38.564805 | orchestrator | 2026-01-08 00:52:38.564809 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-01-08 00:52:38.564820 | orchestrator | Thursday 08 January 2026 00:49:42 +0000 (0:00:00.792) 0:00:10.021 ****** 2026-01-08 00:52:38.564824 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:52:38.564828 | orchestrator | 2026-01-08 00:52:38.564832 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-01-08 00:52:38.564836 | orchestrator | Thursday 08 January 2026 00:49:43 +0000 (0:00:00.474) 0:00:10.496 ****** 2026-01-08 00:52:38.564839 | orchestrator | 
skipping: [testbed-node-0]
2026-01-08 00:52:38.564843 | orchestrator |
2026-01-08 00:52:38.564859 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-01-08 00:52:38.564863 | orchestrator | Thursday 08 January 2026 00:49:43 +0000 (0:00:00.535) 0:00:11.031 ******
2026-01-08 00:52:38.564869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-08 00:52:38.564879 | orchestrator | changed: [testbed-node-0] => (item=<same rabbitmq service definition as for testbed-node-1 above>)
2026-01-08 00:52:38.564884 | orchestrator | changed: [testbed-node-2] => (item=<same rabbitmq service definition as for testbed-node-1 above>)
2026-01-08 00:52:38.564888 | orchestrator |
2026-01-08 00:52:38.564892 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-01-08 00:52:38.564896 | orchestrator | Thursday 08 January 2026 00:49:45 +0000 (0:00:01.940) 0:00:12.972 ******
2026-01-08 00:52:38.564907 | orchestrator | changed: [testbed-node-2] => (item=<same rabbitmq service definition as above>)
2026-01-08 00:52:38.564916 | orchestrator | changed: [testbed-node-1] => (item=<same rabbitmq service definition as above>)
2026-01-08 00:52:38.564930 | orchestrator | changed: [testbed-node-0] => (item=<same rabbitmq service definition as above>)
2026-01-08 00:52:38.564937 | orchestrator |
2026-01-08 00:52:38.564943 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-01-08 00:52:38.564949 | orchestrator | Thursday 08 January 2026 00:49:49 +0000 (0:00:03.508) 0:00:16.481 ******
2026-01-08 00:52:38.564955 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-01-08 00:52:38.564961 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-01-08 00:52:38.564967 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-01-08 00:52:38.564972 | orchestrator |
2026-01-08 00:52:38.564978 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-01-08 00:52:38.564999 | orchestrator | Thursday 08 January 2026 00:49:51 +0000 (0:00:01.798) 0:00:18.280 ****** 2026-01-08 00:52:38.565005 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-08 00:52:38.565011 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-08 00:52:38.565017 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-08 00:52:38.565034 | orchestrator | 2026-01-08 00:52:38.565041 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-01-08 00:52:38.565047 | orchestrator | Thursday 08 January 2026 00:49:53 +0000 (0:00:02.235) 0:00:20.515 ****** 2026-01-08 00:52:38.565052 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-08 00:52:38.565063 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-08 00:52:38.565069 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-08 00:52:38.565075 | orchestrator | 2026-01-08 00:52:38.565080 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-01-08 00:52:38.565091 | orchestrator | Thursday 08 January 2026 00:49:54 +0000 (0:00:01.483) 0:00:21.999 ****** 2026-01-08 00:52:38.565102 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-08 00:52:38.565109 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-08 00:52:38.565114 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-08 00:52:38.565120 | orchestrator | 2026-01-08 00:52:38.565126 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2026-01-08 00:52:38.565133 | orchestrator | Thursday 08 January 2026 00:49:56 +0000 (0:00:01.480) 0:00:23.480 ****** 2026-01-08 00:52:38.565138 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-08 00:52:38.565144 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-08 00:52:38.565150 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-08 00:52:38.565156 | orchestrator | 2026-01-08 00:52:38.565162 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-01-08 00:52:38.565168 | orchestrator | Thursday 08 January 2026 00:49:58 +0000 (0:00:02.240) 0:00:25.720 ****** 2026-01-08 00:52:38.565173 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-08 00:52:38.565179 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-08 00:52:38.565186 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-08 00:52:38.565192 | orchestrator | 2026-01-08 00:52:38.565198 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-08 00:52:38.565204 | orchestrator | Thursday 08 January 2026 00:49:59 +0000 (0:00:01.451) 0:00:27.172 ****** 2026-01-08 00:52:38.565210 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:52:38.565216 | orchestrator | 2026-01-08 00:52:38.565221 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-01-08 00:52:38.565227 | orchestrator | Thursday 08 January 2026 00:50:00 +0000 (0:00:00.868) 0:00:28.041 ****** 2026-01-08 
00:52:38.565235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-08 00:52:38.565242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-08 00:52:38.565262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-08 00:52:38.565269 | orchestrator | 2026-01-08 00:52:38.565274 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-01-08 00:52:38.565280 | orchestrator | Thursday 08 January 2026 00:50:02 +0000 (0:00:01.345) 0:00:29.386 ****** 2026-01-08 00:52:38.565286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-08 00:52:38.565293 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:52:38.565299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-08 00:52:38.565309 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:52:38.565324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-08 00:52:38.565331 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:52:38.565337 | orchestrator | 2026-01-08 00:52:38.565343 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-01-08 00:52:38.565348 | orchestrator | Thursday 08 January 2026 00:50:02 +0000 (0:00:00.478) 0:00:29.864 ****** 2026-01-08 00:52:38.565354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-08 00:52:38.565361 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:52:38.565367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-08 00:52:38.565374 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:52:38.565380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-08 00:52:38.565390 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:52:38.565396 | orchestrator | 2026-01-08 00:52:38.565402 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-01-08 00:52:38.565408 | orchestrator | Thursday 08 January 2026 00:50:03 +0000 (0:00:00.843) 0:00:30.708 ****** 2026-01-08 00:52:38.565420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-08 00:52:38.565427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-08 00:52:38.565433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-08 00:52:38.565445 | orchestrator | 2026-01-08 00:52:38.565451 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-01-08 00:52:38.565457 | orchestrator | Thursday 08 January 2026 00:50:04 +0000 (0:00:01.433) 0:00:32.141 ****** 2026-01-08 00:52:38.565463 | orchestrator | changed: [testbed-node-0] => { 2026-01-08 00:52:38.565469 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:52:38.565475 | orchestrator | } 2026-01-08 00:52:38.565481 | orchestrator | changed: [testbed-node-1] => { 2026-01-08 00:52:38.565487 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:52:38.565492 | orchestrator | } 2026-01-08 00:52:38.565498 | orchestrator | changed: [testbed-node-2] => { 2026-01-08 00:52:38.565504 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:52:38.565510 | orchestrator | } 2026-01-08 00:52:38.565516 | orchestrator | 2026-01-08 00:52:38.565522 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-08 00:52:38.565528 | orchestrator | Thursday 08 January 2026 00:50:05 +0000 (0:00:00.518) 0:00:32.659 ****** 2026-01-08 00:52:38.565540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-08 00:52:38.565547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-08 00:52:38.565553 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:52:38.565559 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:52:38.565565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-08 00:52:38.565575 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:52:38.565581 | orchestrator | 2026-01-08 00:52:38.565587 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-01-08 00:52:38.565593 | orchestrator | Thursday 08 January 2026 00:50:06 +0000 (0:00:00.899) 0:00:33.559 ****** 2026-01-08 00:52:38.565599 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:52:38.565605 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:52:38.565611 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:52:38.565618 | orchestrator | 2026-01-08 00:52:38.565624 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-01-08 00:52:38.565630 | orchestrator | Thursday 08 January 2026 00:50:07 +0000 (0:00:01.259) 0:00:34.819 ****** 2026-01-08 00:52:38.565636 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:52:38.565641 | orchestrator | changed: [testbed-node-2] 
2026-01-08 00:52:38.565647 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:52:38.565653 | orchestrator | 2026-01-08 00:52:38.565659 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-01-08 00:52:38.565665 | orchestrator | Thursday 08 January 2026 00:50:15 +0000 (0:00:07.681) 0:00:42.500 ****** 2026-01-08 00:52:38.565671 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:52:38.565676 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:52:38.565682 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:52:38.565688 | orchestrator | 2026-01-08 00:52:38.565694 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-08 00:52:38.565700 | orchestrator | 2026-01-08 00:52:38.565706 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-08 00:52:38.565712 | orchestrator | Thursday 08 January 2026 00:50:16 +0000 (0:00:00.741) 0:00:43.242 ****** 2026-01-08 00:52:38.565718 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:52:38.565723 | orchestrator | 2026-01-08 00:52:38.565729 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-08 00:52:38.565738 | orchestrator | Thursday 08 January 2026 00:50:16 +0000 (0:00:00.600) 0:00:43.842 ****** 2026-01-08 00:52:38.565744 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:52:38.565750 | orchestrator | 2026-01-08 00:52:38.565756 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-08 00:52:38.565762 | orchestrator | Thursday 08 January 2026 00:50:16 +0000 (0:00:00.126) 0:00:43.969 ****** 2026-01-08 00:52:38.565768 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:52:38.565775 | orchestrator | 2026-01-08 00:52:38.565784 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-08 00:52:38.565790 | 
orchestrator | Thursday 08 January 2026 00:50:18 +0000 (0:00:01.645) 0:00:45.615 ****** 2026-01-08 00:52:38.565797 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:52:38.565803 | orchestrator | 2026-01-08 00:52:38.565808 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-08 00:52:38.565814 | orchestrator | 2026-01-08 00:52:38.565820 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-08 00:52:38.565826 | orchestrator | Thursday 08 January 2026 00:52:10 +0000 (0:01:52.466) 0:02:38.082 ****** 2026-01-08 00:52:38.565832 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:52:38.565838 | orchestrator | 2026-01-08 00:52:38.565844 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-08 00:52:38.565850 | orchestrator | Thursday 08 January 2026 00:52:11 +0000 (0:00:00.741) 0:02:38.823 ****** 2026-01-08 00:52:38.565859 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:52:38.565865 | orchestrator | 2026-01-08 00:52:38.565871 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-08 00:52:38.565877 | orchestrator | Thursday 08 January 2026 00:52:11 +0000 (0:00:00.119) 0:02:38.943 ****** 2026-01-08 00:52:38.565883 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:52:38.565889 | orchestrator | 2026-01-08 00:52:38.565895 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-08 00:52:38.565901 | orchestrator | Thursday 08 January 2026 00:52:13 +0000 (0:00:01.682) 0:02:40.626 ****** 2026-01-08 00:52:38.565907 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:52:38.565912 | orchestrator | 2026-01-08 00:52:38.565919 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-08 00:52:38.565924 | orchestrator | 2026-01-08 00:52:38.565930 | 
orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-08 00:52:38.565936 | orchestrator | Thursday 08 January 2026 00:52:23 +0000 (0:00:10.425) 0:02:51.051 ****** 2026-01-08 00:52:38.565942 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:52:38.565948 | orchestrator | 2026-01-08 00:52:38.565954 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-08 00:52:38.565960 | orchestrator | Thursday 08 January 2026 00:52:24 +0000 (0:00:00.615) 0:02:51.667 ****** 2026-01-08 00:52:38.565966 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:52:38.565972 | orchestrator | 2026-01-08 00:52:38.565978 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-08 00:52:38.566066 | orchestrator | Thursday 08 January 2026 00:52:24 +0000 (0:00:00.132) 0:02:51.799 ****** 2026-01-08 00:52:38.566080 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:52:38.566087 | orchestrator | 2026-01-08 00:52:38.566093 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-08 00:52:38.566100 | orchestrator | Thursday 08 January 2026 00:52:26 +0000 (0:00:01.463) 0:02:53.263 ****** 2026-01-08 00:52:38.566106 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:52:38.566113 | orchestrator | 2026-01-08 00:52:38.566119 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-01-08 00:52:38.566126 | orchestrator | 2026-01-08 00:52:38.566132 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-01-08 00:52:38.566138 | orchestrator | Thursday 08 January 2026 00:52:33 +0000 (0:00:07.858) 0:03:01.121 ****** 2026-01-08 00:52:38.566144 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:52:38.566149 | orchestrator | 2026-01-08 00:52:38.566155 | 
orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-01-08 00:52:38.566162 | orchestrator | Thursday 08 January 2026 00:52:34 +0000 (0:00:00.851) 0:03:01.973 ******
2026-01-08 00:52:38.566168 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:52:38.566174 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:52:38.566180 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:52:38.566186 | orchestrator |
2026-01-08 00:52:38.566192 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 00:52:38.566199 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-01-08 00:52:38.566205 | orchestrator | testbed-node-0 : ok=26  changed=16  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-08 00:52:38.566211 | orchestrator | testbed-node-1 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-08 00:52:38.566217 | orchestrator | testbed-node-2 : ok=24  changed=16  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-08 00:52:38.566223 | orchestrator |
2026-01-08 00:52:38.566230 | orchestrator |
2026-01-08 00:52:38.566236 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 00:52:38.566249 | orchestrator | Thursday 08 January 2026 00:52:37 +0000 (0:00:02.608) 0:03:04.581 ******
2026-01-08 00:52:38.566255 | orchestrator | ===============================================================================
2026-01-08 00:52:38.566262 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------ 130.75s
2026-01-08 00:52:38.566268 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.68s
2026-01-08 00:52:38.566274 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 4.79s
2026-01-08 00:52:38.566280 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.51s
2026-01-08 00:52:38.566289 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.88s
2026-01-08 00:52:38.566295 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.61s
2026-01-08 00:52:38.566300 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.24s
2026-01-08 00:52:38.566313 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.24s
2026-01-08 00:52:38.566319 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.96s
2026-01-08 00:52:38.566325 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.94s
2026-01-08 00:52:38.566331 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.80s
2026-01-08 00:52:38.566337 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.48s
2026-01-08 00:52:38.566343 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.48s
2026-01-08 00:52:38.566349 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.45s
2026-01-08 00:52:38.566355 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 1.43s
2026-01-08 00:52:38.566361 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 1.35s
2026-01-08 00:52:38.566367 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.26s
2026-01-08 00:52:38.566373 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.08s
2026-01-08 00:52:38.566379 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.90s
2026-01-08 00:52:38.566386 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.89s
2026-01-08 00:52:38.566391 | orchestrator | 2026-01-08 00:52:38 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:52:38.566397 | orchestrator | 2026-01-08 00:52:38 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:52:41.611114 | orchestrator | 2026-01-08 00:52:41 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:52:41.612405 | orchestrator | 2026-01-08 00:52:41 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:52:41.615851 | orchestrator | 2026-01-08 00:52:41 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:52:41.615916 | orchestrator | 2026-01-08 00:52:41 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:52:44.655724 | orchestrator | 2026-01-08 00:52:44 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:52:44.656841 | orchestrator | 2026-01-08 00:52:44 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:52:44.658219 | orchestrator | 2026-01-08 00:52:44 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:52:44.658264 | orchestrator | 2026-01-08 00:52:44 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:52:47.701811 | orchestrator | 2026-01-08 00:52:47 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
2026-01-08 00:52:47.704045 | orchestrator | 2026-01-08 00:52:47 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:52:47.705431 | orchestrator | 2026-01-08 00:52:47 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:52:47.705570 | orchestrator | 2026-01-08 00:52:47 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:52:50.733691 | orchestrator | 2026-01-08 00:52:50 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED
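The status lines above come from the deployment client polling each background task every few seconds until it reaches a terminal state. A minimal sketch of that polling pattern, assuming a hypothetical `get_task_state` callable and task IDs (this is not the actual OSISM client code):

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_task_state, interval=1.0, log=print):
    """Poll each task until every one reaches a terminal state."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

# Toy run: a task that reports STARTED twice, then SUCCESS.
states = iter(["STARTED", "STARTED", "SUCCESS"])
wait_for_tasks(["demo-task"], lambda _tid: next(states), interval=0)
```

With a real task backend, `get_task_state` would query the task queue; the fixed one-second interval matches the "Wait 1 second(s)" cadence visible in the log.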
2026-01-08 00:52:50.734090 | orchestrator | 2026-01-08 00:52:50 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state STARTED
2026-01-08 00:52:50.735303 | orchestrator | 2026-01-08 00:52:50 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:52:50.735354 | orchestrator | 2026-01-08 00:52:50 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:54:00.785227 | orchestrator | 2026-01-08 00:54:00 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state
STARTED
2026-01-08 00:54:00.787736 | orchestrator | 2026-01-08 00:54:00 | INFO  | Task 3eac4e53-7ac6-400a-a3f4-66790f2129c0 is in state SUCCESS
2026-01-08 00:54:00.788762 | orchestrator |
2026-01-08 00:54:00.788810 | orchestrator |
2026-01-08 00:54:00.788823 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-08 00:54:00.788836 | orchestrator |
2026-01-08 00:54:00.788848 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-08 00:54:00.788967 | orchestrator | Thursday 08 January 2026 00:50:24 +0000 (0:00:00.203) 0:00:00.203 ******
2026-01-08 00:54:00.789005 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:54:00.789024 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:54:00.789321 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:54:00.789347 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:54:00.789368 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:54:00.789418 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:54:00.789434 | orchestrator |
2026-01-08 00:54:00.789448 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-08 00:54:00.789461 | orchestrator | Thursday 08 January 2026 00:50:25 +0000 (0:00:00.717) 0:00:00.920 ******
2026-01-08 00:54:00.789474 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-01-08 00:54:00.789488 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-01-08 00:54:00.789501 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-01-08 00:54:00.789514 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-01-08 00:54:00.789526 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-01-08 00:54:00.789538 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-01-08 00:54:00.789551 | orchestrator |
2026-01-08 00:54:00.789564 | orchestrator | PLAY [Apply role ovn-controller]
*********************************************** 2026-01-08 00:54:00.789578 | orchestrator | 2026-01-08 00:54:00.789590 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-01-08 00:54:00.789603 | orchestrator | Thursday 08 January 2026 00:50:26 +0000 (0:00:01.044) 0:00:01.965 ****** 2026-01-08 00:54:00.789618 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 00:54:00.789633 | orchestrator | 2026-01-08 00:54:00.789646 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-01-08 00:54:00.789659 | orchestrator | Thursday 08 January 2026 00:50:27 +0000 (0:00:01.162) 0:00:03.127 ****** 2026-01-08 00:54:00.789673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.789742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.789765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.789777 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.789788 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.789800 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.789811 | orchestrator | 2026-01-08 00:54:00.789911 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-01-08 00:54:00.789926 | orchestrator | Thursday 08 January 2026 00:50:29 +0000 (0:00:01.427) 0:00:04.554 ****** 2026-01-08 00:54:00.789937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': 
{'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.789949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.790092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.790117 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.790129 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.790140 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.790152 | orchestrator | 2026-01-08 00:54:00.790181 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-01-08 00:54:00.790193 | orchestrator | Thursday 08 January 2026 00:50:31 +0000 (0:00:02.061) 0:00:06.616 ****** 2026-01-08 00:54:00.790205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.790216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.790239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.790251 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.790262 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.790274 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.790294 | orchestrator | 2026-01-08 00:54:00.790305 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-01-08 00:54:00.790316 | orchestrator | Thursday 08 January 2026 00:50:32 +0000 (0:00:01.269) 0:00:07.886 ****** 
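Each loop item in these tasks is one entry of a service-definition mapping like the ovn-controller dict repeated in the output: the key names the service, and the value carries the container name, image, and volume mounts. A sketch of iterating such a mapping to derive per-service host config directories (the `config_root` layout is an assumption for illustration, not taken from kolla-ansible itself):

```python
# One service entry, copied from the dict shown in the log output above.
services = {
    "ovn-controller": {
        "container_name": "ovn_controller",
        "group": "ovn-controller",
        "enabled": True,
        "image": "registry.osism.tech/kolla/ovn-controller:2025.1",
        "volumes": [
            "/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro",
            "/run/openvswitch:/run/openvswitch:shared",
            "/etc/localtime:/etc/localtime:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
}

def config_paths(services, config_root="/etc/kolla"):
    """Yield (service key, host-side config directory) for enabled services."""
    for key, spec in services.items():
        if spec.get("enabled"):
            yield key, f"{config_root}/{key}"

print(list(config_paths(services)))
# → [('ovn-controller', '/etc/kolla/ovn-controller')]
```

Tasks such as "Ensuring config directories exist" and "Copying over config.json files for services" loop over exactly this kind of mapping, one item per service, on every host in the play.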
2026-01-08 00:54:00.790328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.790339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.790351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.790368 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.790380 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.790391 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.790402 | orchestrator | 2026-01-08 00:54:00.790427 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-01-08 00:54:00.790438 | orchestrator | Thursday 08 January 2026 00:50:34 +0000 (0:00:01.962) 0:00:09.848 ****** 2026-01-08 00:54:00.790449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.790461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.790479 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.790490 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.790502 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.790513 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.790524 | orchestrator | 2026-01-08 00:54:00.790535 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 
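The check/notify step that follows compares the desired container definition against what is currently running and, on any difference, reports `changed` so that handlers restart the container. A rough sketch of that comparison under assumed field names (simplified, not kolla-ansible's actual check module):

```python
def container_needs_restart(desired, actual):
    """Return True when any tracked field differs, i.e. handlers should be notified."""
    tracked = ("image", "volumes", "dimensions")
    return any(desired.get(key) != actual.get(key) for key in tracked)

# Hypothetical before/after specs: only the image tag differs.
running = {"image": "registry.osism.tech/kolla/ovn-controller:2024.2",
           "volumes": [], "dimensions": {}}
desired = {"image": "registry.osism.tech/kolla/ovn-controller:2025.1",
           "volumes": [], "dimensions": {}}
print(container_needs_restart(desired, running))  # → True
```

This is why the log shows every node reporting `changed` here followed by "Notifying handlers": the freshly templated config differs from the running state on first deploy.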
2026-01-08 00:54:00.790553 | orchestrator | Thursday 08 January 2026 00:50:36 +0000 (0:00:02.083) 0:00:11.932 ****** 2026-01-08 00:54:00.790564 | orchestrator | changed: [testbed-node-0] => { 2026-01-08 00:54:00.790576 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:54:00.790587 | orchestrator | } 2026-01-08 00:54:00.790599 | orchestrator | changed: [testbed-node-1] => { 2026-01-08 00:54:00.790610 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:54:00.790621 | orchestrator | } 2026-01-08 00:54:00.790632 | orchestrator | changed: [testbed-node-2] => { 2026-01-08 00:54:00.790643 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:54:00.790654 | orchestrator | } 2026-01-08 00:54:00.790665 | orchestrator | changed: [testbed-node-3] => { 2026-01-08 00:54:00.790676 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:54:00.790718 | orchestrator | } 2026-01-08 00:54:00.790729 | orchestrator | changed: [testbed-node-4] => { 2026-01-08 00:54:00.790740 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:54:00.790751 | orchestrator | } 2026-01-08 00:54:00.790762 | orchestrator | changed: [testbed-node-5] => { 2026-01-08 00:54:00.790772 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:54:00.790783 | orchestrator | } 2026-01-08 00:54:00.790794 | orchestrator | 2026-01-08 00:54:00.790805 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-08 00:54:00.790816 | orchestrator | Thursday 08 January 2026 00:50:37 +0000 (0:00:00.922) 0:00:12.854 ****** 2026-01-08 00:54:00.790828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:54:00.790846 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:54:00.790865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:54:00.790877 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:54:00.790888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:54:00.790900 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:54:00.790911 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:54:00.790922 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:54:00.790933 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:54:00.790945 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:54:00.790956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:54:00.790967 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:54:00.790978 | orchestrator | 2026-01-08 00:54:00.790989 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-01-08 00:54:00.791000 | orchestrator | Thursday 08 January 2026 00:50:38 +0000 (0:00:01.312) 0:00:14.167 ****** 2026-01-08 00:54:00.791011 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:54:00.791022 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:54:00.791033 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:54:00.791044 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:54:00.791055 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:54:00.791066 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:54:00.791076 | orchestrator | 2026-01-08 00:54:00.791093 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-01-08 00:54:00.791105 | orchestrator | Thursday 08 January 2026 00:50:41 +0000 (0:00:02.897) 0:00:17.065 ****** 2026-01-08 00:54:00.791116 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-01-08 00:54:00.791127 | orchestrator | changed: [testbed-node-4] => 
(item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-01-08 00:54:00.791138 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-01-08 00:54:00.791157 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-01-08 00:54:00.791176 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-01-08 00:54:00.791195 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-08 00:54:00.791224 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-01-08 00:54:00.791244 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-08 00:54:00.791261 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-08 00:54:00.791279 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-08 00:54:00.791298 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-08 00:54:00.791317 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-08 00:54:00.791347 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-08 00:54:00.791365 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-08 00:54:00.791384 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-08 00:54:00.791402 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-08 00:54:00.791420 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-08 00:54:00.791431 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-08 00:54:00.791444 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-01-08 00:54:00.791454 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-08 00:54:00.791466 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-08 00:54:00.791477 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-08 00:54:00.791489 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-08 00:54:00.791500 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-08 00:54:00.791511 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-08 00:54:00.791522 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-08 00:54:00.791533 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-08 00:54:00.791544 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-08 00:54:00.791555 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-08 00:54:00.791566 | orchestrator | 
changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-08 00:54:00.791577 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-08 00:54:00.791588 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-08 00:54:00.791600 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-08 00:54:00.791621 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-08 00:54:00.791633 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-08 00:54:00.791644 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-08 00:54:00.791655 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-08 00:54:00.791672 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-08 00:54:00.791729 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-01-08 00:54:00.791751 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-08 00:54:00.791769 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-08 00:54:00.791785 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-08 00:54:00.791803 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-08 00:54:00.791821 | orchestrator | 
changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-01-08 00:54:00.791840 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-08 00:54:00.791859 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-01-08 00:54:00.791877 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-01-08 00:54:00.791906 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-01-08 00:54:00.791923 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-08 00:54:00.791935 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-01-08 00:54:00.791946 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-08 00:54:00.791957 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-08 00:54:00.791967 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-08 00:54:00.791978 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-08 00:54:00.791989 | orchestrator | 2026-01-08 00:54:00.792000 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-08 00:54:00.792011 | orchestrator | 
Thursday 08 January 2026 00:51:00 +0000 (0:00:19.098) 0:00:36.164 ****** 2026-01-08 00:54:00.792022 | orchestrator | 2026-01-08 00:54:00.792033 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-08 00:54:00.792044 | orchestrator | Thursday 08 January 2026 00:51:00 +0000 (0:00:00.054) 0:00:36.218 ****** 2026-01-08 00:54:00.792055 | orchestrator | 2026-01-08 00:54:00.792066 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-08 00:54:00.792077 | orchestrator | Thursday 08 January 2026 00:51:00 +0000 (0:00:00.052) 0:00:36.271 ****** 2026-01-08 00:54:00.792088 | orchestrator | 2026-01-08 00:54:00.792116 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-08 00:54:00.792127 | orchestrator | Thursday 08 January 2026 00:51:00 +0000 (0:00:00.047) 0:00:36.318 ****** 2026-01-08 00:54:00.792138 | orchestrator | 2026-01-08 00:54:00.792149 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-08 00:54:00.792160 | orchestrator | Thursday 08 January 2026 00:51:00 +0000 (0:00:00.050) 0:00:36.369 ****** 2026-01-08 00:54:00.792171 | orchestrator | 2026-01-08 00:54:00.792181 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-08 00:54:00.792192 | orchestrator | Thursday 08 January 2026 00:51:01 +0000 (0:00:00.050) 0:00:36.420 ****** 2026-01-08 00:54:00.792203 | orchestrator | 2026-01-08 00:54:00.792214 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-01-08 00:54:00.792225 | orchestrator | Thursday 08 January 2026 00:51:01 +0000 (0:00:00.053) 0:00:36.473 ****** 2026-01-08 00:54:00.792236 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:54:00.792248 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:54:00.792259 | orchestrator | ok: [testbed-node-4] 2026-01-08 
00:54:00.792270 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:54:00.792281 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:54:00.792292 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:54:00.792302 | orchestrator | 2026-01-08 00:54:00.792313 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-01-08 00:54:00.792324 | orchestrator | Thursday 08 January 2026 00:51:03 +0000 (0:00:01.995) 0:00:38.469 ****** 2026-01-08 00:54:00.792335 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:54:00.792346 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:54:00.792357 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:54:00.792367 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:54:00.792378 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:54:00.792389 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:54:00.792400 | orchestrator | 2026-01-08 00:54:00.792411 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-01-08 00:54:00.792421 | orchestrator | 2026-01-08 00:54:00.792432 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-08 00:54:00.792451 | orchestrator | Thursday 08 January 2026 00:51:11 +0000 (0:00:08.823) 0:00:47.293 ****** 2026-01-08 00:54:00.792462 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:54:00.792473 | orchestrator | 2026-01-08 00:54:00.792484 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-08 00:54:00.792495 | orchestrator | Thursday 08 January 2026 00:51:12 +0000 (0:00:00.692) 0:00:47.985 ****** 2026-01-08 00:54:00.792505 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:54:00.792516 | orchestrator | 2026-01-08 00:54:00.792527 | 
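The "Configure OVN in OVSDB" task in the play above writes per-chassis `external_ids` (ovn-encap-ip, ovn-encap-type, ovn-remote, probe intervals) into Open vSwitch via ovs-vsctl. As a minimal sketch of how those values fit together, assuming a hypothetical helper (`ovn_external_ids` is not part of kolla-ansible; node IPs and the relay port 16641 are taken from the log output):

```python
def ovn_external_ids(encap_ip, remote_ips, remote_port=16641):
    """Build the external_ids mapping one chassis gets, mirroring the
    items shown in the 'Configure OVN in OVSDB' task output."""
    return {
        "ovn-encap-ip": encap_ip,              # this node's tunnel endpoint
        "ovn-encap-type": "geneve",            # overlay encapsulation
        # comma-separated SB DB (relay) endpoints, one per controller node
        "ovn-remote": ",".join(f"tcp:{ip}:{remote_port}" for ip in remote_ips),
        "ovn-remote-probe-interval": "60000",  # milliseconds
        "ovn-openflow-probe-interval": "60",   # seconds
        "ovn-monitor-all": False,
    }

controllers = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]
ids = ovn_external_ids("192.168.16.10", controllers)
print(ids["ovn-remote"])
# -> tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641
```

On the nodes themselves, each key/value pair ends up applied with `ovs-vsctl set open_vswitch . external_ids:<name>=<value>`, which is why the task iterates one item per setting per host.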
orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-01-08 00:54:00.792538 | orchestrator | Thursday 08 January 2026 00:51:13 +0000 (0:00:00.772) 0:00:48.758 ****** 2026-01-08 00:54:00.792549 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:54:00.792560 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:54:00.792571 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:54:00.792582 | orchestrator | 2026-01-08 00:54:00.792592 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-01-08 00:54:00.792603 | orchestrator | Thursday 08 January 2026 00:51:14 +0000 (0:00:00.991) 0:00:49.750 ****** 2026-01-08 00:54:00.792614 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:54:00.792625 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:54:00.792636 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:54:00.792647 | orchestrator | 2026-01-08 00:54:00.792658 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-01-08 00:54:00.792669 | orchestrator | Thursday 08 January 2026 00:51:14 +0000 (0:00:00.564) 0:00:50.314 ****** 2026-01-08 00:54:00.792707 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:54:00.792738 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:54:00.792755 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:54:00.792766 | orchestrator | 2026-01-08 00:54:00.792777 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-01-08 00:54:00.792795 | orchestrator | Thursday 08 January 2026 00:51:15 +0000 (0:00:00.958) 0:00:51.273 ****** 2026-01-08 00:54:00.792807 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:54:00.792817 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:54:00.792828 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:54:00.792839 | orchestrator | 2026-01-08 00:54:00.792850 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has 
already existed] ******* 2026-01-08 00:54:00.792861 | orchestrator | Thursday 08 January 2026 00:51:16 +0000 (0:00:00.667) 0:00:51.941 ****** 2026-01-08 00:54:00.792872 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:54:00.792883 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:54:00.792894 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:54:00.792905 | orchestrator | 2026-01-08 00:54:00.792915 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-01-08 00:54:00.792926 | orchestrator | Thursday 08 January 2026 00:51:17 +0000 (0:00:00.665) 0:00:52.606 ****** 2026-01-08 00:54:00.792937 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:54:00.792948 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:54:00.792959 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:54:00.792970 | orchestrator | 2026-01-08 00:54:00.792981 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-01-08 00:54:00.792991 | orchestrator | Thursday 08 January 2026 00:51:17 +0000 (0:00:00.349) 0:00:52.956 ****** 2026-01-08 00:54:00.793002 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:54:00.793013 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:54:00.793024 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:54:00.793035 | orchestrator | 2026-01-08 00:54:00.793047 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-01-08 00:54:00.793058 | orchestrator | Thursday 08 January 2026 00:51:17 +0000 (0:00:00.405) 0:00:53.361 ****** 2026-01-08 00:54:00.793069 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:54:00.793080 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:54:00.793090 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:54:00.793101 | orchestrator | 2026-01-08 00:54:00.793112 | orchestrator | TASK [ovn-db : Get OVN NB database information] 
******************************** 2026-01-08 00:54:00.793123 | orchestrator | Thursday 08 January 2026 00:51:18 +0000 (0:00:00.229) 0:00:53.591 ****** 2026-01-08 00:54:00.793135 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:54:00.793145 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:54:00.793156 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:54:00.793167 | orchestrator | 2026-01-08 00:54:00.793178 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-01-08 00:54:00.793189 | orchestrator | Thursday 08 January 2026 00:51:18 +0000 (0:00:00.225) 0:00:53.816 ****** 2026-01-08 00:54:00.793200 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:54:00.793211 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:54:00.793222 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:54:00.793233 | orchestrator | 2026-01-08 00:54:00.793244 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-01-08 00:54:00.793256 | orchestrator | Thursday 08 January 2026 00:51:18 +0000 (0:00:00.277) 0:00:54.094 ****** 2026-01-08 00:54:00.793267 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:54:00.793277 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:54:00.793288 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:54:00.793299 | orchestrator | 2026-01-08 00:54:00.793310 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-01-08 00:54:00.793321 | orchestrator | Thursday 08 January 2026 00:51:19 +0000 (0:00:00.393) 0:00:54.488 ****** 2026-01-08 00:54:00.793332 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:54:00.793343 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:54:00.793354 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:54:00.793372 | orchestrator | 2026-01-08 00:54:00.793383 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] 
***************************** 2026-01-08 00:54:00.793394 | orchestrator | Thursday 08 January 2026 00:51:19 +0000 (0:00:00.296) 0:00:54.784 ****** 2026-01-08 00:54:00.793405 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:54:00.793416 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:54:00.793427 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:54:00.793438 | orchestrator | 2026-01-08 00:54:00.793449 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-01-08 00:54:00.793466 | orchestrator | Thursday 08 January 2026 00:51:19 +0000 (0:00:00.314) 0:00:55.099 ****** 2026-01-08 00:54:00.793477 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:54:00.793488 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:54:00.793498 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:54:00.793509 | orchestrator | 2026-01-08 00:54:00.793520 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-01-08 00:54:00.793531 | orchestrator | Thursday 08 January 2026 00:51:19 +0000 (0:00:00.280) 0:00:55.379 ****** 2026-01-08 00:54:00.793542 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:54:00.793553 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:54:00.793564 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:54:00.793575 | orchestrator | 2026-01-08 00:54:00.793586 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-01-08 00:54:00.793597 | orchestrator | Thursday 08 January 2026 00:51:20 +0000 (0:00:00.301) 0:00:55.681 ****** 2026-01-08 00:54:00.793608 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:54:00.793619 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:54:00.793630 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:54:00.793641 | orchestrator | 2026-01-08 00:54:00.793652 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no 
leader] ***************** 2026-01-08 00:54:00.793663 | orchestrator | Thursday 08 January 2026 00:51:20 +0000 (0:00:00.435) 0:00:56.116 ****** 2026-01-08 00:54:00.793674 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:54:00.793722 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:54:00.793734 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:54:00.793745 | orchestrator | 2026-01-08 00:54:00.793756 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-08 00:54:00.793767 | orchestrator | Thursday 08 January 2026 00:51:21 +0000 (0:00:00.298) 0:00:56.415 ****** 2026-01-08 00:54:00.793778 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:54:00.793789 | orchestrator | 2026-01-08 00:54:00.793808 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-01-08 00:54:00.793820 | orchestrator | Thursday 08 January 2026 00:51:21 +0000 (0:00:00.563) 0:00:56.978 ****** 2026-01-08 00:54:00.793831 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:54:00.793842 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:54:00.793853 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:54:00.793864 | orchestrator | 2026-01-08 00:54:00.793875 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-01-08 00:54:00.793887 | orchestrator | Thursday 08 January 2026 00:51:22 +0000 (0:00:00.874) 0:00:57.853 ****** 2026-01-08 00:54:00.793898 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:54:00.793910 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:54:00.793921 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:54:00.793931 | orchestrator | 2026-01-08 00:54:00.793943 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-01-08 00:54:00.793954 | orchestrator | Thursday 08 January 2026 
00:51:23 +0000 (0:00:00.617) 0:00:58.470 ****** 2026-01-08 00:54:00.793990 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:54:00.794002 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:54:00.794055 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:54:00.794069 | orchestrator | 2026-01-08 00:54:00.794080 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-01-08 00:54:00.794101 | orchestrator | Thursday 08 January 2026 00:51:23 +0000 (0:00:00.402) 0:00:58.873 ****** 2026-01-08 00:54:00.794118 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:54:00.794136 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:54:00.794157 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:54:00.794177 | orchestrator | 2026-01-08 00:54:00.794197 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-01-08 00:54:00.794216 | orchestrator | Thursday 08 January 2026 00:51:23 +0000 (0:00:00.445) 0:00:59.318 ****** 2026-01-08 00:54:00.794234 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:54:00.794251 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:54:00.794262 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:54:00.794274 | orchestrator | 2026-01-08 00:54:00.794285 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-01-08 00:54:00.794296 | orchestrator | Thursday 08 January 2026 00:51:24 +0000 (0:00:00.680) 0:00:59.998 ****** 2026-01-08 00:54:00.794307 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:54:00.794318 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:54:00.794330 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:54:00.794340 | orchestrator | 2026-01-08 00:54:00.794351 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-01-08 00:54:00.794363 | orchestrator | Thursday 08 
January 2026 00:51:24 +0000 (0:00:00.344) 0:01:00.343 ****** 2026-01-08 00:54:00.794374 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:54:00.794385 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:54:00.794395 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:54:00.794406 | orchestrator | 2026-01-08 00:54:00.794417 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-01-08 00:54:00.794429 | orchestrator | Thursday 08 January 2026 00:51:25 +0000 (0:00:00.345) 0:01:00.688 ****** 2026-01-08 00:54:00.794440 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:54:00.794451 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:54:00.794462 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:54:00.794473 | orchestrator | 2026-01-08 00:54:00.794484 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-08 00:54:00.794495 | orchestrator | Thursday 08 January 2026 00:51:25 +0000 (0:00:00.352) 0:01:01.041 ****** 2026-01-08 00:54:00.794519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.794536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.794548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.794579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.794592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.794607 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.794626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.794658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:54:00.794757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.794785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:54:00.794804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.794852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:54:00.794871 | orchestrator | 2026-01-08 00:54:00.794888 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-01-08 00:54:00.794907 | orchestrator | Thursday 08 January 2026 00:51:28 +0000 (0:00:03.290) 0:01:04.331 ****** 2026-01-08 00:54:00.794924 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.794943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.794963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.794980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.795008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.795028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.795065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.795099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': 
{'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:54:00.795120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.795139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:54:00.795152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.795169 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:54:00.795197 | orchestrator | 2026-01-08 00:54:00.795218 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-01-08 00:54:00.795236 | orchestrator | Thursday 08 January 2026 00:51:33 +0000 (0:00:04.771) 0:01:09.103 ****** 2026-01-08 00:54:00.795254 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-01-08 00:54:00.795273 | orchestrator | 2026-01-08 00:54:00.795291 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-01-08 00:54:00.795317 | orchestrator | Thursday 08 January 2026 00:51:34 +0000 (0:00:00.643) 0:01:09.746 ****** 2026-01-08 00:54:00.795334 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:54:00.795351 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:54:00.795361 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:54:00.795380 | orchestrator | 2026-01-08 00:54:00.795390 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-01-08 00:54:00.795399 | orchestrator | Thursday 08 January 2026 00:51:35 +0000 (0:00:00.894) 0:01:10.641 ****** 2026-01-08 00:54:00.795409 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:54:00.795419 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:54:00.795428 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:54:00.795439 | orchestrator | 2026-01-08 00:54:00.795454 | orchestrator | TASK [ovn-db : 
Generate config files for OVN relay services] ******************* 2026-01-08 00:54:00.795471 | orchestrator | Thursday 08 January 2026 00:51:36 +0000 (0:00:01.746) 0:01:12.387 ****** 2026-01-08 00:54:00.795486 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:54:00.795502 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:54:00.795517 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:54:00.795531 | orchestrator | 2026-01-08 00:54:00.795548 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-01-08 00:54:00.795564 | orchestrator | Thursday 08 January 2026 00:51:38 +0000 (0:00:01.741) 0:01:14.128 ****** 2026-01-08 00:54:00.795611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.795631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.795652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 
'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.795663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.795673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.795714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.795745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.795756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:54:00.795773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.795784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': 
'1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:54:00.795794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.795804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:54:00.795815 | orchestrator | 2026-01-08 00:54:00.795825 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-01-08 00:54:00.795835 | orchestrator | Thursday 08 January 2026 00:51:43 +0000 (0:00:04.376) 0:01:18.505 ****** 2026-01-08 00:54:00.795845 | orchestrator | changed: [testbed-node-0] => { 2026-01-08 00:54:00.795855 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:54:00.795865 | orchestrator | } 2026-01-08 00:54:00.795875 | orchestrator | changed: [testbed-node-1] => { 2026-01-08 00:54:00.795885 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:54:00.795901 | orchestrator | } 2026-01-08 00:54:00.795911 | orchestrator | changed: [testbed-node-2] => { 2026-01-08 00:54:00.795921 | orchestrator 
|  "msg": "Notifying handlers" 2026-01-08 00:54:00.795931 | orchestrator | } 2026-01-08 00:54:00.795940 | orchestrator | 2026-01-08 00:54:00.795950 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-08 00:54:00.795960 | orchestrator | Thursday 08 January 2026 00:51:43 +0000 (0:00:00.360) 0:01:18.866 ****** 2026-01-08 00:54:00.795971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:54:00.795986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:54:00.795996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:54:00.796014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:54:00.796025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:54:00.796036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:54:00.796046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:54:00.796066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:54:00.796077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:54:00.796091 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 00:54:00.796102 | orchestrator | 2026-01-08 00:54:00.796112 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-01-08 00:54:00.796122 | orchestrator | Thursday 08 January 2026 00:51:46 +0000 (0:00:02.786) 0:01:21.652 ****** 2026-01-08 00:54:00.796132 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-01-08 00:54:00.796143 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-01-08 00:54:00.796152 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-01-08 00:54:00.796162 | orchestrator | 2026-01-08 00:54:00.796172 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-01-08 00:54:00.796182 | orchestrator | Thursday 08 January 2026 00:51:47 +0000 (0:00:00.926) 0:01:22.579 ****** 2026-01-08 00:54:00.796192 | orchestrator | changed: [testbed-node-0] => { 2026-01-08 00:54:00.796202 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:54:00.796211 | orchestrator | } 2026-01-08 00:54:00.796221 | orchestrator | changed: [testbed-node-1] => { 2026-01-08 00:54:00.796231 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:54:00.796241 | orchestrator | } 2026-01-08 00:54:00.796251 | orchestrator | changed: [testbed-node-2] => { 2026-01-08 00:54:00.796260 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:54:00.796275 | orchestrator | } 2026-01-08 00:54:00.796285 | orchestrator | 2026-01-08 00:54:00.796295 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-08 00:54:00.796305 | orchestrator | Thursday 08 January 2026 00:51:47 +0000 (0:00:00.740) 0:01:23.319 ****** 2026-01-08 00:54:00.796315 | orchestrator | 2026-01-08 00:54:00.796324 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-08 00:54:00.796334 | orchestrator | Thursday 08 January 2026 00:51:47 +0000 
(0:00:00.070) 0:01:23.389 ****** 2026-01-08 00:54:00.796344 | orchestrator | 2026-01-08 00:54:00.796353 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-08 00:54:00.796363 | orchestrator | Thursday 08 January 2026 00:51:48 +0000 (0:00:00.064) 0:01:23.454 ****** 2026-01-08 00:54:00.796372 | orchestrator | 2026-01-08 00:54:00.796382 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-08 00:54:00.796398 | orchestrator | Thursday 08 January 2026 00:51:48 +0000 (0:00:00.066) 0:01:23.520 ****** 2026-01-08 00:54:00.796408 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:54:00.796417 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:54:00.796427 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:54:00.796437 | orchestrator | 2026-01-08 00:54:00.796446 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-08 00:54:00.796456 | orchestrator | Thursday 08 January 2026 00:51:59 +0000 (0:00:10.976) 0:01:34.497 ****** 2026-01-08 00:54:00.796466 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:54:00.796475 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:54:00.796485 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:54:00.796495 | orchestrator | 2026-01-08 00:54:00.796504 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-01-08 00:54:00.796514 | orchestrator | Thursday 08 January 2026 00:52:12 +0000 (0:00:13.423) 0:01:47.921 ****** 2026-01-08 00:54:00.796524 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-01-08 00:54:00.796534 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-01-08 00:54:00.796543 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-01-08 00:54:00.796553 | orchestrator | 2026-01-08 00:54:00.796563 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] 
************************ 2026-01-08 00:54:00.796572 | orchestrator | Thursday 08 January 2026 00:52:27 +0000 (0:00:14.493) 0:02:02.414 ****** 2026-01-08 00:54:00.796582 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:54:00.796592 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:54:00.796601 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:54:00.796611 | orchestrator | 2026-01-08 00:54:00.796621 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-08 00:54:00.796631 | orchestrator | Thursday 08 January 2026 00:52:41 +0000 (0:00:14.066) 0:02:16.481 ****** 2026-01-08 00:54:00.796641 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:54:00.796650 | orchestrator | 2026-01-08 00:54:00.796660 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-08 00:54:00.796670 | orchestrator | Thursday 08 January 2026 00:52:41 +0000 (0:00:00.123) 0:02:16.605 ****** 2026-01-08 00:54:00.796701 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:54:00.796713 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:54:00.796723 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:54:00.796732 | orchestrator | 2026-01-08 00:54:00.796742 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-08 00:54:00.796752 | orchestrator | Thursday 08 January 2026 00:52:41 +0000 (0:00:00.801) 0:02:17.406 ****** 2026-01-08 00:54:00.796761 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:54:00.796771 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:54:00.796781 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:54:00.796790 | orchestrator | 2026-01-08 00:54:00.796800 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-08 00:54:00.796810 | orchestrator | Thursday 08 January 2026 00:52:42 +0000 (0:00:00.642) 0:02:18.049 ****** 2026-01-08 
00:54:00.796819 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:54:00.796829 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:54:00.796839 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:54:00.796848 | orchestrator | 2026-01-08 00:54:00.796858 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-08 00:54:00.796868 | orchestrator | Thursday 08 January 2026 00:52:43 +0000 (0:00:00.933) 0:02:18.982 ****** 2026-01-08 00:54:00.796884 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:54:00.796894 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:54:00.796904 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:54:00.796913 | orchestrator | 2026-01-08 00:54:00.796923 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-08 00:54:00.796934 | orchestrator | Thursday 08 January 2026 00:52:44 +0000 (0:00:00.675) 0:02:19.658 ****** 2026-01-08 00:54:00.796950 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:54:00.796978 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:54:00.796993 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:54:00.797007 | orchestrator | 2026-01-08 00:54:00.797024 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-08 00:54:00.797040 | orchestrator | Thursday 08 January 2026 00:52:45 +0000 (0:00:00.849) 0:02:20.508 ****** 2026-01-08 00:54:00.797057 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:54:00.797073 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:54:00.797090 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:54:00.797100 | orchestrator | 2026-01-08 00:54:00.797109 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-01-08 00:54:00.797119 | orchestrator | Thursday 08 January 2026 00:52:45 +0000 (0:00:00.801) 0:02:21.309 ****** 2026-01-08 00:54:00.797129 | orchestrator | ok: [testbed-node-0] => 
(item=1)
2026-01-08 00:54:00.797138 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-01-08 00:54:00.797148 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-01-08 00:54:00.797158 | orchestrator |
2026-01-08 00:54:00.797168 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-01-08 00:54:00.797177 | orchestrator | Thursday 08 January 2026 00:52:46 +0000 (0:00:01.089) 0:02:22.398 ******
2026-01-08 00:54:00.797187 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:54:00.797196 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:54:00.797206 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:54:00.797215 | orchestrator |
2026-01-08 00:54:00.797225 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-01-08 00:54:00.797245 | orchestrator | Thursday 08 January 2026 00:52:47 +0000 (0:00:00.350) 0:02:22.751 ******
2026-01-08 00:54:00.797262 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.797278 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.797294 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.797311 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.797329 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.797358 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.797377 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.797483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.797514 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.797525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.797535 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.797545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.797555 | orchestrator |
2026-01-08 00:54:00.797565 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-01-08 00:54:00.797587 | orchestrator | Thursday 08 January 2026 00:52:50 +0000 (0:00:03.023) 0:02:25.774 ******
2026-01-08 00:54:00.797603 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB':
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.797627 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.797645 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.797673 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.797767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.797791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.797811 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.797828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.797855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.797878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.797894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.798119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.798149 | orchestrator |
2026-01-08 00:54:00.798166 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] *************************
2026-01-08 00:54:00.798182 | orchestrator | Thursday 08 January 2026 00:52:57 +0000 (0:00:06.818) 0:02:32.592 ******
2026-01-08 00:54:00.798201 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1)
2026-01-08 00:54:00.798217 | orchestrator |
2026-01-08 00:54:00.798234 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] *****
2026-01-08 00:54:00.798244 | orchestrator | Thursday 08 January 2026 00:52:58 +0000 (0:00:00.882) 0:02:33.475 ******
2026-01-08 00:54:00.798254 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:54:00.798265 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:54:00.798275 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:54:00.798284 | orchestrator |
2026-01-08 00:54:00.798294 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] **********
2026-01-08 00:54:00.798304 | orchestrator | Thursday 08 January 2026 00:52:58 +0000 (0:00:00.691) 0:02:34.167 ******
2026-01-08 00:54:00.798313 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:54:00.798324 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:54:00.798333 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:54:00.798341 | orchestrator |
2026-01-08 00:54:00.798349 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] *******************
2026-01-08 00:54:00.798357 | orchestrator | Thursday 08 January 2026 00:53:00 +0000
(0:00:01.721) 0:02:35.889 ******
2026-01-08 00:54:00.798365 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:54:00.798373 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:54:00.798391 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:54:00.798399 | orchestrator |
2026-01-08 00:54:00.798410 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ********************
2026-01-08 00:54:00.798418 | orchestrator | Thursday 08 January 2026 00:53:02 +0000 (0:00:01.867) 0:02:37.756 ******
2026-01-08 00:54:00.798427 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.798436 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.798452 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.798460 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.798469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.798486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.798495 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.798509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.798517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.798526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.798534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.798548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.798557 | orchestrator |
2026-01-08 00:54:00.798565 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-01-08 00:54:00.798574 | orchestrator | Thursday 08 January 2026 00:53:07 +0000 (0:00:04.781) 0:02:42.537 ******
2026-01-08 00:54:00.798582 | orchestrator | ok: [testbed-node-0] => {
2026-01-08 00:54:00.798590 | orchestrator |  "msg": "Notifying handlers"
2026-01-08 00:54:00.798598 | orchestrator | }
2026-01-08 00:54:00.798606 | orchestrator | changed: [testbed-node-1] => {
2026-01-08 00:54:00.798614 | orchestrator |  "msg": "Notifying handlers"
2026-01-08 00:54:00.798622 | orchestrator | }
2026-01-08 00:54:00.798631 | orchestrator | changed: [testbed-node-2] => {
2026-01-08 00:54:00.798638 | orchestrator |  "msg": "Notifying handlers"
2026-01-08 00:54:00.798646 | orchestrator | }
2026-01-08 00:54:00.798654 | orchestrator |
2026-01-08 00:54:00.798662 | orchestrator | TASK
[service-check-containers : Include tasks] ********************************
2026-01-08 00:54:00.798670 | orchestrator | Thursday 08 January 2026 00:53:07 +0000 (0:00:00.596) 0:02:43.134 ******
2026-01-08 00:54:00.798710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.798730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.798739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.798747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.798756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.798768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.798777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-northd:2025.1', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.798785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.798799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2025.1', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.798815 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-1, testbed-node-2, testbed-node-0 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/ovn-sb-db-relay:2025.1', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 00:54:00.798823 | orchestrator |
2026-01-08 00:54:00.798831 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] *****
2026-01-08 00:54:00.798839 | orchestrator | Thursday 08 January 2026 00:53:10 +0000 (0:00:02.307) 0:02:45.441 ******
2026-01-08 00:54:00.798847 | orchestrator | changed: [testbed-node-0] => (item=[1])
2026-01-08 00:54:00.798856 | orchestrator | changed: [testbed-node-1] => (item=[1])
2026-01-08 00:54:00.798864 | orchestrator | changed: [testbed-node-2] => (item=[1])
2026-01-08 00:54:00.798872 | orchestrator |
2026-01-08 00:54:00.798880 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-01-08 00:54:00.798888 | orchestrator | Thursday 08 January 2026 00:53:11 +0000 (0:00:01.418) 0:02:46.860 ******
2026-01-08 00:54:00.798896 | orchestrator | changed: [testbed-node-0] => {
2026-01-08 00:54:00.798904 | orchestrator |  "msg": "Notifying handlers"
2026-01-08 00:54:00.798912 | orchestrator | }
2026-01-08 00:54:00.798920 | orchestrator | changed: [testbed-node-1] => {
2026-01-08 00:54:00.798928 | orchestrator |  "msg": "Notifying handlers"
2026-01-08 00:54:00.798936 | orchestrator | }
2026-01-08 00:54:00.798943 | orchestrator | changed: [testbed-node-2] => {
2026-01-08 00:54:00.798957 | orchestrator |  "msg": "Notifying handlers"
2026-01-08 00:54:00.798970 | orchestrator | }
2026-01-08 00:54:00.798983 | orchestrator |
2026-01-08 00:54:00.798996 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-08 00:54:00.799010 | orchestrator | Thursday 08 January 2026 00:53:12 +0000 (0:00:00.592) 0:02:47.452 ******
2026-01-08 00:54:00.799023 | orchestrator |
2026-01-08 00:54:00.799036 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-08 00:54:00.799049 | orchestrator | Thursday 08 January 2026 00:53:12 +0000 (0:00:00.072) 0:02:47.524 ******
2026-01-08 00:54:00.799058 | orchestrator |
2026-01-08 00:54:00.799066 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-08
00:54:00.799074 | orchestrator | Thursday 08 January 2026 00:53:12 +0000 (0:00:00.066) 0:02:47.590 ****** 2026-01-08 00:54:00.799082 | orchestrator | 2026-01-08 00:54:00.799090 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-08 00:54:00.799100 | orchestrator | Thursday 08 January 2026 00:53:12 +0000 (0:00:00.065) 0:02:47.656 ****** 2026-01-08 00:54:00.799117 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:54:00.799136 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:54:00.799148 | orchestrator | 2026-01-08 00:54:00.799161 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-08 00:54:00.799173 | orchestrator | Thursday 08 January 2026 00:53:27 +0000 (0:00:14.798) 0:03:02.455 ****** 2026-01-08 00:54:00.799185 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:54:00.799197 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:54:00.799209 | orchestrator | 2026-01-08 00:54:00.799220 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-01-08 00:54:00.799240 | orchestrator | Thursday 08 January 2026 00:53:39 +0000 (0:00:12.316) 0:03:14.772 ****** 2026-01-08 00:54:00.799263 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-01-08 00:54:00.799276 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-01-08 00:54:00.799288 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-01-08 00:54:00.799301 | orchestrator | 2026-01-08 00:54:00.799315 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-08 00:54:00.799329 | orchestrator | Thursday 08 January 2026 00:53:52 +0000 (0:00:13.298) 0:03:28.071 ****** 2026-01-08 00:54:00.799341 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:54:00.799353 | orchestrator | 2026-01-08 00:54:00.799366 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] 
****************************** 2026-01-08 00:54:00.799374 | orchestrator | Thursday 08 January 2026 00:53:52 +0000 (0:00:00.132) 0:03:28.203 ****** 2026-01-08 00:54:00.799382 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:54:00.799390 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:54:00.799398 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:54:00.799406 | orchestrator | 2026-01-08 00:54:00.799414 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-08 00:54:00.799422 | orchestrator | Thursday 08 January 2026 00:53:53 +0000 (0:00:00.845) 0:03:29.048 ****** 2026-01-08 00:54:00.799506 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:54:00.799517 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:54:00.799525 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:54:00.799533 | orchestrator | 2026-01-08 00:54:00.799541 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-08 00:54:00.799549 | orchestrator | Thursday 08 January 2026 00:53:54 +0000 (0:00:00.697) 0:03:29.746 ****** 2026-01-08 00:54:00.799557 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:54:00.799565 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:54:00.799573 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:54:00.799580 | orchestrator | 2026-01-08 00:54:00.799588 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-08 00:54:00.799606 | orchestrator | Thursday 08 January 2026 00:53:55 +0000 (0:00:01.173) 0:03:30.919 ****** 2026-01-08 00:54:00.799615 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:54:00.799623 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:54:00.799631 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:54:00.799639 | orchestrator | 2026-01-08 00:54:00.799647 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-08 
00:54:00.799655 | orchestrator | Thursday 08 January 2026 00:53:56 +0000 (0:00:00.688) 0:03:31.607 ****** 2026-01-08 00:54:00.799663 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:54:00.799671 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:54:00.799704 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:54:00.799718 | orchestrator | 2026-01-08 00:54:00.799730 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-08 00:54:00.799738 | orchestrator | Thursday 08 January 2026 00:53:57 +0000 (0:00:00.849) 0:03:32.457 ****** 2026-01-08 00:54:00.799746 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:54:00.799754 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:54:00.799762 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:54:00.799770 | orchestrator | 2026-01-08 00:54:00.799778 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-01-08 00:54:00.799786 | orchestrator | Thursday 08 January 2026 00:53:57 +0000 (0:00:00.809) 0:03:33.267 ****** 2026-01-08 00:54:00.799794 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-01-08 00:54:00.799802 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-01-08 00:54:00.799810 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-01-08 00:54:00.799823 | orchestrator | 2026-01-08 00:54:00.799834 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 00:54:00.799851 | orchestrator | testbed-node-0 : ok=65  changed=29  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-08 00:54:00.799865 | orchestrator | testbed-node-1 : ok=63  changed=30  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-01-08 00:54:00.799894 | orchestrator | testbed-node-2 : ok=63  changed=30  unreachable=0 failed=0 skipped=23  rescued=0 ignored=0 2026-01-08 00:54:00.799908 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 
ignored=0 2026-01-08 00:54:00.799921 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-08 00:54:00.799934 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-08 00:54:00.799946 | orchestrator | 2026-01-08 00:54:00.799958 | orchestrator | 2026-01-08 00:54:00.799972 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 00:54:00.799986 | orchestrator | Thursday 08 January 2026 00:53:59 +0000 (0:00:01.202) 0:03:34.470 ****** 2026-01-08 00:54:00.799999 | orchestrator | =============================================================================== 2026-01-08 00:54:00.800012 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 27.79s 2026-01-08 00:54:00.800025 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 25.78s 2026-01-08 00:54:00.800039 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 25.74s 2026-01-08 00:54:00.800053 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.10s 2026-01-08 00:54:00.800063 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.07s 2026-01-08 00:54:00.800072 | orchestrator | ovn-controller : Restart ovn-controller container ----------------------- 8.82s 2026-01-08 00:54:00.800094 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.82s 2026-01-08 00:54:00.800108 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 4.78s 2026-01-08 00:54:00.800121 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.77s 2026-01-08 00:54:00.800134 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 4.38s 2026-01-08 00:54:00.800147 | 
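The PLAY RECAP lines above follow a fixed `host : key=value ...` shape, so they can be parsed mechanically; a small sketch (hypothetical helper, not part of the job):

```python
import re

# One recap line: a hostname, a colon, then whitespace-separated key=value stats.
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<stats>.*)$")


def parse_recap(line: str) -> dict:
    """Parse one PLAY RECAP line into {'host': ..., 'ok': ..., 'changed': ...}."""
    m = RECAP_RE.match(line.strip())
    if not m:
        raise ValueError("not a recap line")
    stats = {}
    for pair in m.group("stats").split():
        key, _, value = pair.partition("=")
        stats[key] = int(value)
    return {"host": m.group("host"), **stats}
```

Feeding it the node-1 line from the recap above yields `failed=0` and `unreachable=0`, which is what lets the run proceed to the next play.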
orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.29s 2026-01-08 00:54:00.800161 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.02s 2026-01-08 00:54:00.800174 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.90s 2026-01-08 00:54:00.800187 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.79s 2026-01-08 00:54:00.800200 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.31s 2026-01-08 00:54:00.800213 | orchestrator | service-check-containers : ovn_controller | Check containers ------------ 2.08s 2026-01-08 00:54:00.800227 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.06s 2026-01-08 00:54:00.800239 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.00s 2026-01-08 00:54:00.800253 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.96s 2026-01-08 00:54:00.800262 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 1.87s 2026-01-08 00:54:00.800271 | orchestrator | 2026-01-08 00:54:00 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:54:00.800279 | orchestrator | 2026-01-08 00:54:00 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:54:03.842862 | orchestrator | 2026-01-08 00:54:03 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state STARTED 2026-01-08 00:54:03.845334 | orchestrator | 2026-01-08 00:54:03 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:54:03.845476 | orchestrator | 2026-01-08 00:54:03 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:55:56.635720 | orchestrator | 2026-01-08 00:55:56 | INFO  | Task f42c6111-f97f-4f5d-a0d8-b352846ab483 is in state SUCCESS 2026-01-08 00:55:56.637927 | orchestrator | 2026-01-08 00:55:56.637968 | orchestrator | 2026-01-08 00:55:56.637974 | orchestrator | PLAY [Group hosts
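The long run of "Task … is in state STARTED" / "Wait 1 second(s) until the next check" messages is a client polling task state on a fixed interval until every task leaves STARTED. A minimal sketch of that loop, with `fetch_state` as a hypothetical stand-in for the real API call:

```python
import time


def wait_for_tasks(task_ids, fetch_state, interval=1.0, timeout=600.0,
                   sleep=time.sleep):
    """Poll task states until none is STARTED, mirroring the log pattern.

    fetch_state(task_id) -> str is a hypothetical callable; the real
    client-side behaviour may differ.
    """
    deadline = time.monotonic() + timeout
    pending = list(task_ids)
    while pending:
        still_running = []
        for task_id in pending:
            state = fetch_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "STARTED":
                still_running.append(task_id)
        pending = still_running
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still running: {pending}")
            print(f"Wait {interval:g} second(s) until the next check")
            sleep(interval)
    return True
```

In the log both task UUIDs are polled together, and the loop ends once the f42c6111… task reports SUCCESS and no polled task remains in STARTED.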
based on configuration] ************************************** 2026-01-08 00:55:56.637980 | orchestrator | 2026-01-08 00:55:56.637985 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-08 00:55:56.637990 | orchestrator | Thursday 08 January 2026 00:49:08 +0000 (0:00:00.376) 0:00:00.376 ****** 2026-01-08 00:55:56.637995 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:55:56.638001 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:55:56.638006 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:55:56.638047 | orchestrator | 2026-01-08 00:55:56.638051 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-08 00:55:56.638056 | orchestrator | Thursday 08 January 2026 00:49:08 +0000 (0:00:00.348) 0:00:00.725 ****** 2026-01-08 00:55:56.638062 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-01-08 00:55:56.638084 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-01-08 00:55:56.638088 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-01-08 00:55:56.638092 | orchestrator | 2026-01-08 00:55:56.638096 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-01-08 00:55:56.638099 | orchestrator | 2026-01-08 00:55:56.638103 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-08 00:55:56.638107 | orchestrator | Thursday 08 January 2026 00:49:08 +0000 (0:00:00.534) 0:00:01.260 ****** 2026-01-08 00:55:56.638112 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:55:56.638118 | orchestrator | 2026-01-08 00:55:56.638124 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-01-08 00:55:56.638201 | orchestrator | Thursday 08 January 2026 00:49:09 +0000 
(0:00:00.847) 0:00:02.107 ****** 2026-01-08 00:55:56.638208 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:55:56.638215 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:55:56.638221 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:55:56.638227 | orchestrator | 2026-01-08 00:55:56.638301 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-01-08 00:55:56.638311 | orchestrator | Thursday 08 January 2026 00:49:10 +0000 (0:00:01.031) 0:00:03.139 ****** 2026-01-08 00:55:56.638318 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:55:56.638325 | orchestrator | 2026-01-08 00:55:56.638568 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-01-08 00:55:56.638583 | orchestrator | Thursday 08 January 2026 00:49:11 +0000 (0:00:00.847) 0:00:03.987 ****** 2026-01-08 00:55:56.638589 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:55:56.638593 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:55:56.638596 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:55:56.638600 | orchestrator | 2026-01-08 00:55:56.638604 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-01-08 00:55:56.638609 | orchestrator | Thursday 08 January 2026 00:49:12 +0000 (0:00:00.942) 0:00:04.929 ****** 2026-01-08 00:55:56.638613 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-08 00:55:56.638617 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-08 00:55:56.638621 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-08 00:55:56.638625 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-08 00:55:56.638629 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-08 00:55:56.638643 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-08 00:55:56.638648 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-08 00:55:56.638652 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-08 00:55:56.638656 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-08 00:55:56.638660 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-08 00:55:56.638663 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-08 00:55:56.638667 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-08 00:55:56.638671 | orchestrator | 2026-01-08 00:55:56.638675 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-08 00:55:56.638679 | orchestrator | Thursday 08 January 2026 00:49:16 +0000 (0:00:04.037) 0:00:08.966 ****** 2026-01-08 00:55:56.638683 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-01-08 00:55:56.638687 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-01-08 00:55:56.638699 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-01-08 00:55:56.638703 | orchestrator | 2026-01-08 00:55:56.638707 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-08 00:55:56.638711 | orchestrator | Thursday 08 January 2026 00:49:17 +0000 (0:00:00.911) 0:00:09.878 ****** 2026-01-08 00:55:56.638714 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-01-08 00:55:56.638718 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-01-08 
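In the sysctl task above, items with the sentinel value `KOLLA_UNSET` report "ok" while the numeric ones report "changed", suggesting the sentinel marks settings that are left unmanaged (an assumption based on the log output, not on the role source). A sketch of that filtering step:

```python
def effective_sysctls(settings):
    """Split desired sysctl items into (to_apply, to_leave).

    'KOLLA_UNSET' semantics are assumed: such items are never written to
    the kernel, so the task reports them as 'ok'.
    """
    apply_, leave = [], []
    for item in settings:
        (leave if item["value"] == "KOLLA_UNSET" else apply_).append(item)
    return apply_, leave


# The items seen in the task output above.
settings = [
    {"name": "net.ipv6.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.tcp_retries2", "value": "KOLLA_UNSET"},
    {"name": "net.unix.max_dgram_qlen", "value": 128},
]
```

With this split, only the three numeric settings would be passed to a sysctl writer, matching the three "changed" items per node in the log.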
00:55:56.638722 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-01-08 00:55:56.638726 | orchestrator | 2026-01-08 00:55:56.638730 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-08 00:55:56.638733 | orchestrator | Thursday 08 January 2026 00:49:19 +0000 (0:00:01.494) 0:00:11.373 ****** 2026-01-08 00:55:56.638737 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-01-08 00:55:56.638741 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.638754 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-01-08 00:55:56.638758 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.638762 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-01-08 00:55:56.638766 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.638769 | orchestrator | 2026-01-08 00:55:56.638773 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-01-08 00:55:56.638777 | orchestrator | Thursday 08 January 2026 00:49:19 +0000 (0:00:00.733) 0:00:12.106 ****** 2026-01-08 00:55:56.638782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-08 00:55:56.638791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-08 00:55:56.638795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-08 00:55:56.638802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-08 00:55:56.638810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-08 00:55:56.638819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-08 00:55:56.638824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-08 00:55:56.638828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-08 00:55:56.638832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-08 00:55:56.638836 | orchestrator | 2026-01-08 00:55:56.638840 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-01-08 00:55:56.638844 | orchestrator | Thursday 08 January 2026 00:49:22 +0000 (0:00:02.553) 0:00:14.659 ****** 2026-01-08 00:55:56.638848 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:55:56.638852 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:55:56.638856 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:55:56.638860 | orchestrator | 2026-01-08 00:55:56.638863 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-01-08 00:55:56.638867 | orchestrator | Thursday 08 January 2026 00:49:24 +0000 (0:00:01.706) 0:00:16.366 ****** 2026-01-08 00:55:56.638871 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-01-08 00:55:56.638875 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-01-08 00:55:56.638879 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-01-08 00:55:56.638882 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-01-08 00:55:56.638889 | orchestrator | changed: 
[testbed-node-2] => (item=rules) 2026-01-08 00:55:56.638893 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-01-08 00:55:56.638897 | orchestrator | 2026-01-08 00:55:56.638901 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-01-08 00:55:56.638905 | orchestrator | Thursday 08 January 2026 00:49:26 +0000 (0:00:02.413) 0:00:18.780 ****** 2026-01-08 00:55:56.638909 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:55:56.638912 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:55:56.638919 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:55:56.638923 | orchestrator | 2026-01-08 00:55:56.638926 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-01-08 00:55:56.638930 | orchestrator | Thursday 08 January 2026 00:49:29 +0000 (0:00:03.049) 0:00:21.829 ****** 2026-01-08 00:55:56.638934 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:55:56.638938 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:55:56.638942 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:55:56.638945 | orchestrator | 2026-01-08 00:55:56.638949 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-01-08 00:55:56.638954 | orchestrator | Thursday 08 January 2026 00:49:31 +0000 (0:00:02.283) 0:00:24.112 ****** 2026-01-08 00:55:56.638961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-08 00:55:56.638973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-08 00:55:56.639303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-08 00:55:56.639316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a236428325531fa80f8f16837a440cf4bf569d9f', '__omit_place_holder__a236428325531fa80f8f16837a440cf4bf569d9f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-08 
00:55:56.639320 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.639325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-08 00:55:56.639341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-08 00:55:56.639345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-08 00:55:56.639349 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a236428325531fa80f8f16837a440cf4bf569d9f', '__omit_place_holder__a236428325531fa80f8f16837a440cf4bf569d9f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-08 00:55:56.639353 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.639440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-08 00:55:56.639447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-08 00:55:56.639451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-08 00:55:56.639461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a236428325531fa80f8f16837a440cf4bf569d9f', '__omit_place_holder__a236428325531fa80f8f16837a440cf4bf569d9f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-08 00:55:56.639465 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.639469 | orchestrator | 2026-01-08 00:55:56.639473 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-01-08 00:55:56.639477 | orchestrator | Thursday 08 January 2026 00:49:32 +0000 (0:00:00.719) 0:00:24.832 ****** 2026-01-08 00:55:56.639485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-08 00:55:56.639489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-08 00:55:56.639507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-08 00:55:56.639512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-08 00:55:56.639516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-08 00:55:56.639524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a236428325531fa80f8f16837a440cf4bf569d9f', '__omit_place_holder__a236428325531fa80f8f16837a440cf4bf569d9f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-08 00:55:56.639531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-08 00:55:56.639535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-08 00:55:56.639539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a236428325531fa80f8f16837a440cf4bf569d9f', '__omit_place_holder__a236428325531fa80f8f16837a440cf4bf569d9f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-08 00:55:56.639553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-08 00:55:56.639558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-08 00:55:56.639567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2025.1', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a236428325531fa80f8f16837a440cf4bf569d9f', '__omit_place_holder__a236428325531fa80f8f16837a440cf4bf569d9f'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-08 00:55:56.639571 | orchestrator | 2026-01-08 00:55:56.639575 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-01-08 00:55:56.639579 | orchestrator | Thursday 08 January 2026 00:49:36 +0000 (0:00:03.780) 0:00:28.612 ****** 2026-01-08 00:55:56.639583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-08 00:55:56.639594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-08 00:55:56.639598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-08 00:55:56.639616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-08 00:55:56.639620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-08 00:55:56.639628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-08 00:55:56.639632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-08 00:55:56.639639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-08 00:55:56.639643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-08 00:55:56.639647 | orchestrator | 2026-01-08 00:55:56.639651 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-01-08 00:55:56.639655 | orchestrator | Thursday 08 January 2026 00:49:39 +0000 (0:00:03.567) 0:00:32.180 ****** 2026-01-08 00:55:56.639659 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-08 00:55:56.639663 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-08 00:55:56.639667 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-08 00:55:56.639671 | orchestrator | 2026-01-08 00:55:56.639674 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-01-08 00:55:56.639678 | orchestrator | Thursday 08 January 2026 00:49:42 +0000 (0:00:02.866) 0:00:35.047 ****** 2026-01-08 00:55:56.639682 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-08 00:55:56.639686 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-08 00:55:56.639690 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-08 00:55:56.639694 | orchestrator | 2026-01-08 00:55:56.639701 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-01-08 00:55:56.639708 | orchestrator | Thursday 08 January 2026 00:49:49 +0000 (0:00:06.810) 0:00:41.857 ****** 2026-01-08 00:55:56.639713 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.639719 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.639725 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.639731 | orchestrator | 2026-01-08 00:55:56.639738 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-01-08 00:55:56.639744 | orchestrator | Thursday 08 January 2026 00:49:50 +0000 (0:00:00.851) 0:00:42.709 ****** 2026-01-08 00:55:56.639750 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-08 00:55:56.639828 | orchestrator | changed: [testbed-node-0] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-08 00:55:56.639836 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-08 00:55:56.639842 | orchestrator | 2026-01-08 00:55:56.639849 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-01-08 00:55:56.639854 | orchestrator | Thursday 08 January 2026 00:49:52 +0000 (0:00:02.471) 0:00:45.180 ****** 2026-01-08 00:55:56.639858 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-08 00:55:56.639861 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-08 00:55:56.639866 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-08 00:55:56.639873 | orchestrator | 2026-01-08 00:55:56.639879 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-08 00:55:56.639885 | orchestrator | Thursday 08 January 2026 00:49:55 +0000 (0:00:02.510) 0:00:47.690 ****** 2026-01-08 00:55:56.639891 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:55:56.640203 | orchestrator | 2026-01-08 00:55:56.640208 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-01-08 00:55:56.640212 | orchestrator | Thursday 08 January 2026 00:49:55 +0000 (0:00:00.518) 0:00:48.209 ****** 2026-01-08 00:55:56.640216 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-01-08 00:55:56.640220 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-01-08 00:55:56.640224 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-01-08 
00:55:56.640227 | orchestrator | 2026-01-08 00:55:56.640231 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-01-08 00:55:56.640235 | orchestrator | Thursday 08 January 2026 00:49:58 +0000 (0:00:02.271) 0:00:50.480 ****** 2026-01-08 00:55:56.640239 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-01-08 00:55:56.640243 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-01-08 00:55:56.640247 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-01-08 00:55:56.640250 | orchestrator | 2026-01-08 00:55:56.640254 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-01-08 00:55:56.640258 | orchestrator | Thursday 08 January 2026 00:50:00 +0000 (0:00:02.406) 0:00:52.887 ****** 2026-01-08 00:55:56.640262 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.640266 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.640270 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.640273 | orchestrator | 2026-01-08 00:55:56.640277 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-01-08 00:55:56.640281 | orchestrator | Thursday 08 January 2026 00:50:00 +0000 (0:00:00.339) 0:00:53.226 ****** 2026-01-08 00:55:56.640285 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.640289 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.640293 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.640304 | orchestrator | 2026-01-08 00:55:56.640308 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-08 00:55:56.640311 | orchestrator | Thursday 08 January 2026 00:50:01 +0000 (0:00:00.420) 0:00:53.646 ****** 2026-01-08 00:55:56.640316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-08 00:55:56.640399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-08 00:55:56.640412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-08 00:55:56.640416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-08 00:55:56.640420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-08 00:55:56.640427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-08 00:55:56.640441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-08 00:55:56.640447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-08 00:55:56.640471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-08 00:55:56.640478 | orchestrator | 2026-01-08 00:55:56.640484 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-08 00:55:56.640489 | orchestrator | Thursday 08 January 2026 00:50:04 +0000 (0:00:03.342) 0:00:56.989 ****** 2026-01-08 00:55:56.640496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-08 00:55:56.640501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-08 00:55:56.640507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-08 00:55:56.640514 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.640524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-08 00:55:56.641219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-08 00:55:56.641228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-08 00:55:56.641233 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.641266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-08 00:55:56.641495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-08 00:55:56.641501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-08 00:55:56.641506 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.641510 | orchestrator | 2026-01-08 00:55:56.641514 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-08 00:55:56.641518 | orchestrator | Thursday 08 January 2026 00:50:05 +0000 (0:00:01.215) 0:00:58.205 ****** 2026-01-08 00:55:56.641522 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-08 00:55:56.641536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-08 00:55:56.641541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-08 00:55:56.641545 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.641615 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-08 00:55:56.641622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-08 00:55:56.641626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-08 00:55:56.641631 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.641635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-08 00:55:56.641647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-08 00:55:56.641651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-08 00:55:56.641655 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.641659 | orchestrator | 2026-01-08 00:55:56.641673 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] 
************************ 2026-01-08 00:55:56.641677 | orchestrator | Thursday 08 January 2026 00:50:07 +0000 (0:00:01.557) 0:00:59.762 ****** 2026-01-08 00:55:56.641681 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-08 00:55:56.641686 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-08 00:55:56.641690 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-08 00:55:56.641694 | orchestrator | 2026-01-08 00:55:56.641697 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-01-08 00:55:56.641701 | orchestrator | Thursday 08 January 2026 00:50:09 +0000 (0:00:01.705) 0:01:01.467 ****** 2026-01-08 00:55:56.641705 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-08 00:55:56.641721 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-08 00:55:56.641725 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-08 00:55:56.641729 | orchestrator | 2026-01-08 00:55:56.641733 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-01-08 00:55:56.641737 | orchestrator | Thursday 08 January 2026 00:50:11 +0000 (0:00:01.882) 0:01:03.349 ****** 2026-01-08 00:55:56.641740 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-08 00:55:56.641744 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-08 00:55:56.641830 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 
'sshd_config'})  2026-01-08 00:55:56.641834 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-08 00:55:56.641838 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.641842 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-08 00:55:56.641846 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.641850 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-08 00:55:56.641859 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.641862 | orchestrator | 2026-01-08 00:55:56.641866 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-01-08 00:55:56.641870 | orchestrator | Thursday 08 January 2026 00:50:12 +0000 (0:00:01.907) 0:01:05.257 ****** 2026-01-08 00:55:56.641874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-08 00:55:56.641878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-08 00:55:56.641886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-08 00:55:56.641890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-08 00:55:56.641907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-08 00:55:56.641911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-08 00:55:56.641920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-08 00:55:56.641924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-08 00:55:56.641928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-08 00:55:56.641931 | orchestrator | 2026-01-08 00:55:56.641935 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-01-08 00:55:56.641942 | orchestrator | Thursday 08 January 2026 00:50:15 +0000 (0:00:02.337) 0:01:07.594 ****** 2026-01-08 00:55:56.641946 | orchestrator | changed: [testbed-node-0] => { 2026-01-08 00:55:56.641950 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:55:56.641953 | orchestrator | } 2026-01-08 00:55:56.641957 | orchestrator | changed: [testbed-node-1] => { 2026-01-08 00:55:56.641961 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:55:56.642775 | orchestrator | } 2026-01-08 00:55:56.642817 | orchestrator | changed: [testbed-node-2] => { 2026-01-08 00:55:56.642823 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:55:56.642828 | orchestrator | } 2026-01-08 00:55:56.642832 | orchestrator | 2026-01-08 00:55:56.642836 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-08 00:55:56.642841 | orchestrator | Thursday 08 January 2026 00:50:15 +0000 (0:00:00.359) 0:01:07.954 ****** 2026-01-08 00:55:56.642846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-08 00:55:56.643006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-08 00:55:56.643029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-08 00:55:56.643036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-08 00:55:56.643043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-08 00:55:56.643049 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.643064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-08 00:55:56.643071 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.643077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-08 00:55:56.643084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-08 00:55:56.643278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-08 00:55:56.643299 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.643304 | orchestrator | 2026-01-08 00:55:56.643308 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-01-08 00:55:56.643312 | orchestrator | Thursday 08 January 2026 00:50:16 +0000 (0:00:01.273) 0:01:09.227 ****** 2026-01-08 00:55:56.643316 
| orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:55:56.643320 | orchestrator | 2026-01-08 00:55:56.643324 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-01-08 00:55:56.643327 | orchestrator | Thursday 08 January 2026 00:50:17 +0000 (0:00:00.677) 0:01:09.905 ****** 2026-01-08 00:55:56.643333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.643339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-08 
00:55:56.643347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.643351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.643461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': 
['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.643475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-08 00:55:56.643482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.643486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.643493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.643498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-08 00:55:56.643538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.643544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.643551 | orchestrator | 2026-01-08 00:55:56.643556 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-01-08 00:55:56.643560 | orchestrator | Thursday 08 January 2026 00:50:22 +0000 (0:00:04.580) 0:01:14.486 ****** 2026-01-08 00:55:56.643565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.643569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-08 00:55:56.643576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.643580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.643588 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.643637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.643643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-08 00:55:56.643647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 
'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.643672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.643724 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.643732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2025.1', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.643740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2025.1', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-08 00:55:56.644070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2025.1', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.644090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2025.1', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}})
2026-01-08 00:55:56.644095 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.644099 | orchestrator |
2026-01-08 00:55:56.644103 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-01-08 00:55:56.644107 | orchestrator | Thursday 08 January 2026 00:50:22 +0000 (0:00:00.830) 0:01:15.317 ******
2026-01-08 00:55:56.644112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-01-08 00:55:56.644117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-01-08 00:55:56.644122 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:55:56.644126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-01-08 00:55:56.644134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-01-08 00:55:56.644138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-01-08 00:55:56.644145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-01-08 00:55:56.644170 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.644174 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.644178 | orchestrator |
2026-01-08 00:55:56.644186 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2026-01-08 00:55:56.644190 | orchestrator | Thursday 08 January 2026 00:50:23 +0000 (0:00:00.961) 0:01:16.278 ******
2026-01-08 00:55:56.644194 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:55:56.644198 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:55:56.644202 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:55:56.644206 | orchestrator |
2026-01-08 00:55:56.644209 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-01-08 00:55:56.644213 | orchestrator | Thursday 08 January 2026 00:50:25 +0000 (0:00:01.492) 0:01:17.770 ******
2026-01-08 00:55:56.644217 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:55:56.644221 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:55:56.644225 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:55:56.644229 | orchestrator |
2026-01-08 00:55:56.644232 | orchestrator | TASK [include_role : barbican] *************************************************
2026-01-08 00:55:56.644236 | orchestrator | Thursday 08 January 2026 00:50:27 +0000 (0:00:02.019) 0:01:19.790 ******
2026-01-08 00:55:56.644240 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:55:56.644754 | orchestrator |
2026-01-08 00:55:56.644760 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2026-01-08 00:55:56.644764 | orchestrator | Thursday 08 January 2026 00:50:28 +0000 (0:00:00.856) 0:01:20.647 ******
2026-01-08 00:55:56.644825 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.644836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.644841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.644857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.644862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.644900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.644906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.644911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.644918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.644922 | orchestrator | 2026-01-08 00:55:56.644927 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-01-08 00:55:56.644934 | orchestrator | Thursday 08 January 2026 00:50:32 +0000 (0:00:04.281) 0:01:24.928 ****** 2026-01-08 00:55:56.644938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.644961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.644965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.644969 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.644974 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.644981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.644988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.644992 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.645025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.645032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.645036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.645044 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.645048 | orchestrator | 2026-01-08 00:55:56.645052 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-01-08 00:55:56.645056 | orchestrator | Thursday 08 January 2026 00:50:33 +0000 (0:00:01.098) 0:01:26.027 ****** 2026-01-08 00:55:56.645064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.645068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.645073 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.645077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.645085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.645089 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.645093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.645097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.645144 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.645149 | orchestrator | 2026-01-08 00:55:56.645543 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-01-08 00:55:56.645553 | orchestrator | Thursday 08 January 2026 00:50:35 +0000 (0:00:01.749) 0:01:27.777 ****** 2026-01-08 00:55:56.645557 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:55:56.645562 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:55:56.645566 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:55:56.645570 | orchestrator | 2026-01-08 00:55:56.645574 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-01-08 00:55:56.645578 | orchestrator | Thursday 08 January 2026 00:50:36 +0000 
(0:00:01.309) 0:01:29.086 ****** 2026-01-08 00:55:56.645582 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:55:56.645586 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:55:56.645603 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:55:56.645607 | orchestrator | 2026-01-08 00:55:56.645610 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-01-08 00:55:56.645614 | orchestrator | Thursday 08 January 2026 00:50:39 +0000 (0:00:02.296) 0:01:31.382 ****** 2026-01-08 00:55:56.645619 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.645622 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.645626 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.645630 | orchestrator | 2026-01-08 00:55:56.645675 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-01-08 00:55:56.645680 | orchestrator | Thursday 08 January 2026 00:50:39 +0000 (0:00:00.409) 0:01:31.792 ****** 2026-01-08 00:55:56.645744 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:55:56.645748 | orchestrator | 2026-01-08 00:55:56.645752 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-01-08 00:55:56.645763 | orchestrator | Thursday 08 January 2026 00:50:40 +0000 (0:00:00.913) 0:01:32.706 ****** 2026-01-08 00:55:56.645767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-08 00:55:56.645773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-08 00:55:56.645779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 
192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-01-08 00:55:56.645784 | orchestrator | 2026-01-08 00:55:56.645788 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-01-08 00:55:56.645792 | orchestrator | Thursday 08 January 2026 00:50:45 +0000 (0:00:04.816) 0:01:37.523 ****** 2026-01-08 00:55:56.645796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-08 00:55:56.645800 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.646297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 
2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-08 00:55:56.646325 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.646329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-01-08 00:55:56.646333 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.646337 | orchestrator | 2026-01-08 00:55:56.646341 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-01-08 00:55:56.646345 | orchestrator | Thursday 08 January 2026 00:50:46 +0000 (0:00:01.731) 0:01:39.255 ****** 2026-01-08 00:55:56.646350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-08 00:55:56.646358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-08 00:55:56.646364 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.646367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-08 00:55:56.646415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-08 00:55:56.647194 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.647213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-08 00:55:56.647234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-01-08 00:55:56.647239 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.647243 | orchestrator | 2026-01-08 00:55:56.647247 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-01-08 00:55:56.647251 | orchestrator | Thursday 08 January 2026 00:50:48 +0000 (0:00:01.982) 0:01:41.237 ****** 2026-01-08 00:55:56.647255 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.647259 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.647262 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.647266 | orchestrator | 2026-01-08 00:55:56.647270 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-01-08 00:55:56.647274 | orchestrator | Thursday 08 January 2026 00:50:49 +0000 (0:00:00.402) 0:01:41.640 ****** 2026-01-08 00:55:56.647278 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.647281 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.647291 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.647295 | orchestrator | 2026-01-08 00:55:56.647298 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-01-08 00:55:56.647302 | orchestrator | Thursday 08 January 2026 00:50:50 +0000 (0:00:01.207) 0:01:42.847 ****** 2026-01-08 00:55:56.647306 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:55:56.647310 | orchestrator | 2026-01-08 00:55:56.647314 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-01-08 
00:55:56.647318 | orchestrator | Thursday 08 January 2026 00:50:51 +0000 (0:00:01.045) 0:01:43.892 ****** 2026-01-08 00:55:56.647323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.647331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.647335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.647351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.647356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.647360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.647365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.647480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.647507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.647511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.647515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.647520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.647524 | orchestrator | 2026-01-08 00:55:56.647528 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-01-08 00:55:56.647537 | orchestrator | Thursday 08 January 2026 00:50:55 +0000 (0:00:03.806) 0:01:47.699 ****** 2026-01-08 00:55:56.647545 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.647551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.647559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.647563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.647567 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.647583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.647589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.647596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.647604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.647608 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.647612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 00:55:56.647616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.647622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.647629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.647632 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.647636 | orchestrator |
2026-01-08 00:55:56.647640 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-01-08 00:55:56.647644 | orchestrator | Thursday 08 January 2026 00:50:56 +0000 (0:00:01.166) 0:01:48.865 ******
2026-01-08 00:55:56.647657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-08 00:55:56.647671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-08 00:55:56.647676 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:55:56.647680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-08 00:55:56.647684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-08 00:55:56.647687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-08 00:55:56.647691 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.647695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-08 00:55:56.647699 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.647703 | orchestrator |
2026-01-08 00:55:56.647707 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-01-08 00:55:56.647710 | orchestrator | Thursday 08 January 2026 00:50:57 +0000 (0:00:00.911) 0:01:49.777 ******
2026-01-08 00:55:56.647714 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:55:56.647718 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:55:56.647722 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:55:56.647726 | orchestrator |
2026-01-08 00:55:56.647729 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-01-08 00:55:56.647733 | orchestrator | Thursday 08 January 2026 00:50:58 +0000 (0:00:01.249) 0:01:51.026 ******
2026-01-08 00:55:56.647740 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:55:56.647744 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:55:56.647747 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:55:56.647751 | orchestrator |
2026-01-08 00:55:56.647755 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-01-08 00:55:56.647759 | orchestrator | Thursday 08 January 2026 00:51:00 +0000 (0:00:01.843) 0:01:52.869 ******
2026-01-08 00:55:56.647763 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:55:56.647766 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.647770 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.647774 | orchestrator |
2026-01-08 00:55:56.647778 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-01-08 00:55:56.647781 | orchestrator | Thursday 08 January 2026 00:51:00 +0000 (0:00:00.281) 0:01:53.151 ******
2026-01-08 00:55:56.647785 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:55:56.647789 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.647793 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.647797 | orchestrator |
2026-01-08 00:55:56.647800 | orchestrator | TASK [include_role : designate] ************************************************
2026-01-08 00:55:56.647804 | orchestrator | Thursday 08 January 2026 00:51:01 +0000 (0:00:00.309) 0:01:53.460 ******
2026-01-08 00:55:56.647808 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:55:56.647812 | orchestrator |
2026-01-08 00:55:56.647818 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-01-08 00:55:56.647822 | orchestrator | Thursday 08 January 2026 00:51:02 +0000 (0:00:01.116) 0:01:54.577 ******
2026-01-08 00:55:56.647826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 00:55:56.647833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-08 00:55:56.647837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.647844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.647849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.647857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.647861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.647868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 00:55:56.647872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-08 00:55:56.647878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.647882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.647888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.647892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.647896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.647903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 00:55:56.647910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-08 00:55:56.647914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.647920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.647924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.647928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.647935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.647941 | orchestrator |
2026-01-08 00:55:56.647945 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-01-08 00:55:56.647949 | orchestrator | Thursday 08 January 2026 00:51:06 +0000 (0:00:03.926) 0:01:58.503 ******
2026-01-08 00:55:56.647953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 00:55:56.647957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-08 00:55:56.647963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.647967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.647971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.647978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 00:55:56.647984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.647988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-08 00:55:56.647994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.647998 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.648002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.648006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.648013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.648020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.648024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.648028 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:55:56.648034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 00:55:56.648038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-08 00:55:56.648042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.648051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.648055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.648059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.648063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2025.1', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.648067 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.648070 | orchestrator |
2026-01-08 00:55:56.648074 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-01-08 00:55:56.648078 | orchestrator | Thursday 08 January 2026 00:51:07 +0000 (0:00:00.994) 0:01:59.498 ******
2026-01-08 00:55:56.648085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-01-08 00:55:56.648089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-01-08 00:55:56.648094 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:55:56.648098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-01-08 00:55:56.648102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-01-08 00:55:56.648110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-01-08 00:55:56.648114 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.648118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-01-08 00:55:56.648122 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.648126 | orchestrator |
2026-01-08 00:55:56.648132 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-01-08 00:55:56.648136 | orchestrator | Thursday 08 January 2026 00:51:08 +0000 (0:00:01.281) 0:02:00.780 ******
2026-01-08 00:55:56.648140 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:55:56.648144 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:55:56.648148 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:55:56.648151 | orchestrator |
2026-01-08 00:55:56.648155 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-01-08 00:55:56.648159 | orchestrator | Thursday 08 January 2026 00:51:09 +0000 (0:00:01.391) 0:02:02.171 ******
2026-01-08 00:55:56.648163 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:55:56.648167 | orchestrator |
changed: [testbed-node-1] 2026-01-08 00:55:56.648171 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:55:56.648174 | orchestrator | 2026-01-08 00:55:56.648178 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-01-08 00:55:56.648182 | orchestrator | Thursday 08 January 2026 00:51:12 +0000 (0:00:02.287) 0:02:04.459 ****** 2026-01-08 00:55:56.648186 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.648190 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.648194 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.648197 | orchestrator | 2026-01-08 00:55:56.648201 | orchestrator | TASK [include_role : glance] *************************************************** 2026-01-08 00:55:56.648205 | orchestrator | Thursday 08 January 2026 00:51:12 +0000 (0:00:00.347) 0:02:04.806 ****** 2026-01-08 00:55:56.648209 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:55:56.648213 | orchestrator | 2026-01-08 00:55:56.648216 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-01-08 00:55:56.648220 | orchestrator | Thursday 08 January 2026 00:51:13 +0000 (0:00:01.053) 0:02:05.860 ****** 2026-01-08 00:55:56.648227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-08 00:55:56.648238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 
check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-08 00:55:56.648243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-08 00:55:56.648254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-08 00:55:56.648798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-08 00:55:56.648839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-08 00:55:56.648854 | orchestrator | 2026-01-08 00:55:56.648858 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-01-08 00:55:56.648863 | orchestrator | Thursday 08 January 2026 00:51:18 +0000 (0:00:05.019) 0:02:10.879 ****** 2026-01-08 00:55:56.648898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-08 00:55:56.648907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-08 00:55:56.648917 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.648948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-08 00:55:56.648959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option 
httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-08 00:55:56.648966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-08 00:55:56.648971 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.649002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2025.1', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-08 00:55:56.649014 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.649018 | orchestrator | 2026-01-08 00:55:56.649022 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-01-08 00:55:56.649026 | orchestrator | Thursday 08 January 2026 00:51:21 +0000 (0:00:02.704) 0:02:13.584 ****** 2026-01-08 00:55:56.649031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-08 00:55:56.649036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-08 00:55:56.649040 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.649044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-01-08 00:55:56.649075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-01-08 00:55:56.649081 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.649085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-01-08 00:55:56.649094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-01-08 00:55:56.649098 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.649102 | orchestrator |
2026-01-08 00:55:56.649108 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2026-01-08 00:55:56.649112 | orchestrator | Thursday 08 January 2026 00:51:24 +0000 (0:00:03.742) 0:02:17.327 ******
2026-01-08 00:55:56.649116 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:55:56.649120 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:55:56.649124 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:55:56.649128 | orchestrator |
2026-01-08 00:55:56.649132 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-01-08 00:55:56.649143 | orchestrator | Thursday 08 January 2026 00:51:26 +0000 (0:00:01.405) 0:02:18.733 ******
2026-01-08 00:55:56.649147 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:55:56.649151 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:55:56.649155 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:55:56.649158 | orchestrator |
2026-01-08 00:55:56.649162 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-01-08 00:55:56.649166 | orchestrator | Thursday 08 January 2026 00:51:28 +0000 (0:00:02.318) 0:02:21.052 ******
2026-01-08 00:55:56.649170 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:55:56.649174 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.649178 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.649181 | orchestrator |
2026-01-08 00:55:56.649185 | orchestrator | TASK [include_role : grafana] **************************************************
2026-01-08 00:55:56.649189 | orchestrator | Thursday 08 January 2026 00:51:29 +0000 (0:00:00.880) 0:02:21.368 ******
2026-01-08 00:55:56.649193 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:55:56.649197 | orchestrator |
2026-01-08 00:55:56.649200 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2026-01-08 00:55:56.649204 | orchestrator | Thursday 08 January 2026 00:51:29 +0000 (0:00:00.880) 0:02:22.249 ******
2026-01-08 00:55:56.649208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 00:55:56.649213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 00:55:56.649244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 00:55:56.649250 | orchestrator |
2026-01-08 00:55:56.649254 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2026-01-08 00:55:56.649258 | orchestrator | Thursday 08 January 2026 00:51:33 +0000 (0:00:03.709) 0:02:25.959 ******
2026-01-08 00:55:56.649266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 00:55:56.649271 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:55:56.649275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 00:55:56.649279 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.649283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 00:55:56.649287 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.649291 | orchestrator |
2026-01-08 00:55:56.649295 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2026-01-08 00:55:56.649299 | orchestrator | Thursday 08 January 2026 00:51:34 +0000 (0:00:00.419) 0:02:26.378 ******
2026-01-08 00:55:56.649303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-01-08 00:55:56.649310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-01-08 00:55:56.649314 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:55:56.649348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-01-08 00:55:56.649357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-01-08 00:55:56.649362 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.649366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-01-08 00:55:56.649385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-01-08 00:55:56.649389 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.649393 | orchestrator |
2026-01-08 00:55:56.649397 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-01-08 00:55:56.649401 | orchestrator | Thursday 08 January 2026 00:51:34 +0000 (0:00:00.745) 0:02:27.124 ******
2026-01-08 00:55:56.649405 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:55:56.649408 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:55:56.649412 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:55:56.649416 | orchestrator |
2026-01-08 00:55:56.649420 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-01-08 00:55:56.649423 | orchestrator | Thursday 08 January 2026 00:51:36 +0000 (0:00:01.635) 0:02:28.759 ******
2026-01-08 00:55:56.649427 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:55:56.649433 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:55:56.649437 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:55:56.649441 | orchestrator |
2026-01-08 00:55:56.649444 | orchestrator | TASK [include_role : heat] *****************************************************
2026-01-08 00:55:56.649448 | orchestrator | Thursday 08 January 2026 00:51:38 +0000 (0:00:02.115) 0:02:30.875 ******
2026-01-08 00:55:56.649452 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:55:56.649456 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.649460 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.649463 | orchestrator |
2026-01-08 00:55:56.649467 | orchestrator | TASK [include_role : horizon] **************************************************
2026-01-08 00:55:56.649471 | orchestrator | Thursday 08 January 2026 00:51:38 +0000 (0:00:00.319) 0:02:31.194 ******
2026-01-08 00:55:56.649475 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:55:56.649479 | orchestrator |
2026-01-08 00:55:56.649482 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2026-01-08 00:55:56.649486 | orchestrator | Thursday 08 January 2026 00:51:39 +0000 (0:00:00.943) 0:02:32.138 ******
2026-01-08 00:55:56.649516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-08 00:55:56.649533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-08 00:55:56.649565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-08 00:55:56.649574 | orchestrator |
2026-01-08 00:55:56.649578 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2026-01-08 00:55:56.649581 | orchestrator | Thursday 08 January 2026 00:51:44 +0000 (0:00:04.413) 0:02:36.551 ******
2026-01-08 00:55:56.649588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-08 00:55:56.649682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-08 00:55:56.649695 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.649699 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:55:56.649707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-01-08 00:55:56.649715 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.649719 | orchestrator |
2026-01-08 00:55:56.649723 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2026-01-08 00:55:56.649727 | orchestrator | Thursday 08 January 2026 00:51:45 +0000 (0:00:01.232) 0:02:37.784 ******
2026-01-08 00:55:56.649733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-01-08 00:55:56.649767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-01-08 00:55:56.649773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-08 00:55:56.649781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-08 00:55:56.649786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-01-08 00:55:56.649792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-01-08 00:55:56.649798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-08 00:55:56.649805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-08 00:55:56.649809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-01-08 00:55:56.649814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-01-08 00:55:56.649818 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:55:56.649821 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.649825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-01-08 00:55:56.649829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-08 00:55:56.649833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})
2026-01-08 00:55:56.649837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-01-08 00:55:56.649870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-01-08 00:55:56.649881 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.649886 | orchestrator |
2026-01-08 00:55:56.649890 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-01-08 00:55:56.649894 | orchestrator | Thursday 08 January 2026 00:51:46 +0000 (0:00:01.165) 0:02:38.950 ******
2026-01-08 00:55:56.649897 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:55:56.649901 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:55:56.649905 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:55:56.649909 | orchestrator |
2026-01-08 00:55:56.649912 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-01-08 00:55:56.649916 | orchestrator | Thursday 08 January 2026 00:51:48 +0000 (0:00:01.497) 0:02:40.447 ******
2026-01-08 00:55:56.649920 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:55:56.649924 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:55:56.649927 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:55:56.649931 | orchestrator |
2026-01-08 00:55:56.649935 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-01-08 00:55:56.649939 | orchestrator | Thursday 08 January 2026 00:51:50 +0000 (0:00:02.621) 0:02:43.069 ******
2026-01-08 00:55:56.649943 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:55:56.649946 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.649950 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.649954 | orchestrator |
2026-01-08 00:55:56.649961 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-01-08 00:55:56.649965 | orchestrator | Thursday 08 January 2026 00:51:51 +0000 (0:00:00.601) 0:02:43.670 ******
2026-01-08 00:55:56.649969 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:55:56.649973 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.649977 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.649981 | orchestrator |
2026-01-08 00:55:56.649985 | orchestrator | TASK [include_role : keystone] *************************************************
2026-01-08 00:55:56.649991 | orchestrator | Thursday 08 January 2026 00:51:51 +0000 (0:00:00.607) 0:02:44.277 ******
2026-01-08 00:55:56.649995 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:55:56.649998 | orchestrator |
2026-01-08 00:55:56.650002 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2026-01-08 00:55:56.650006 | orchestrator | Thursday 08 January 2026 00:51:53 +0000 (0:00:01.496) 0:02:45.773 ******
2026-01-08 00:55:56.650010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-08 00:55:56.650036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-08 00:55:56.650040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-08 00:55:56.650076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-08 00:55:56.650085
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-08 00:55:56.650104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-08 00:55:56.650109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-08 00:55:56.650113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-08 00:55:56.650146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-08 00:55:56.650152 | orchestrator | 2026-01-08 00:55:56.650161 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-01-08 00:55:56.650165 | orchestrator | Thursday 08 January 2026 00:51:57 +0000 (0:00:04.412) 0:02:50.185 ****** 2026-01-08 00:55:56.650172 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-08 00:55:56.650176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-08 00:55:56.650180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-08 00:55:56.650184 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.650188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-08 00:55:56.650220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-08 00:55:56.650233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-08 00:55:56.650238 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.650244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin', 'option httpchk']}}}})  2026-01-08 00:55:56.650248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-08 00:55:56.650252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-08 00:55:56.650256 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.650260 | orchestrator | 2026-01-08 00:55:56.650264 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-01-08 00:55:56.650268 | orchestrator | Thursday 08 January 2026 00:51:58 +0000 (0:00:00.538) 0:02:50.724 ****** 2026-01-08 00:55:56.650272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-08 
00:55:56.650276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-08 00:55:56.650313 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.650321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-08 00:55:56.650326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-08 00:55:56.650330 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.650334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-08 00:55:56.650338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-08 00:55:56.650342 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.650346 | orchestrator | 2026-01-08 00:55:56.650349 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-01-08 
00:55:56.650355 | orchestrator | Thursday 08 January 2026 00:51:59 +0000 (0:00:00.841) 0:02:51.565 ****** 2026-01-08 00:55:56.650359 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:55:56.650363 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:55:56.650367 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:55:56.650386 | orchestrator | 2026-01-08 00:55:56.650390 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-01-08 00:55:56.650393 | orchestrator | Thursday 08 January 2026 00:52:00 +0000 (0:00:01.468) 0:02:53.034 ****** 2026-01-08 00:55:56.650397 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:55:56.650401 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:55:56.650405 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:55:56.650409 | orchestrator | 2026-01-08 00:55:56.650413 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-01-08 00:55:56.650417 | orchestrator | Thursday 08 January 2026 00:52:02 +0000 (0:00:02.268) 0:02:55.302 ****** 2026-01-08 00:55:56.650421 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.650425 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.650428 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.650432 | orchestrator | 2026-01-08 00:55:56.650436 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-01-08 00:55:56.650440 | orchestrator | Thursday 08 January 2026 00:52:03 +0000 (0:00:00.360) 0:02:55.662 ****** 2026-01-08 00:55:56.650444 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:55:56.650448 | orchestrator | 2026-01-08 00:55:56.650452 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-01-08 00:55:56.650456 | orchestrator | Thursday 08 January 2026 00:52:04 +0000 (0:00:01.631) 0:02:57.293 ****** 
2026-01-08 00:55:56.650460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.650501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.650511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.650518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.650522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.650530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.650534 | orchestrator | 2026-01-08 00:55:56.650538 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-01-08 00:55:56.650542 | orchestrator | Thursday 08 January 2026 00:52:08 +0000 (0:00:03.253) 0:03:00.547 ****** 2026-01-08 00:55:56.650574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.650586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.650590 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.650594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.650599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.650606 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.650637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.650643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.650647 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.650651 | orchestrator | 2026-01-08 00:55:56.650657 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-01-08 00:55:56.650661 | orchestrator | Thursday 08 January 2026 00:52:08 +0000 (0:00:00.558) 0:03:01.105 ****** 2026-01-08 00:55:56.650668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.650672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.650676 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.650681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.650685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.650692 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.650696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.650700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.650704 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.650708 | orchestrator | 2026-01-08 00:55:56.650712 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-01-08 00:55:56.650716 | orchestrator | Thursday 08 January 2026 00:52:09 +0000 (0:00:00.832) 0:03:01.938 ****** 2026-01-08 00:55:56.650720 | 
orchestrator | changed: [testbed-node-0] 2026-01-08 00:55:56.650724 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:55:56.650727 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:55:56.650731 | orchestrator | 2026-01-08 00:55:56.650735 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-01-08 00:55:56.650739 | orchestrator | Thursday 08 January 2026 00:52:11 +0000 (0:00:01.400) 0:03:03.338 ****** 2026-01-08 00:55:56.650743 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:55:56.650747 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:55:56.650751 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:55:56.650755 | orchestrator | 2026-01-08 00:55:56.650759 | orchestrator | TASK [include_role : manila] *************************************************** 2026-01-08 00:55:56.650763 | orchestrator | Thursday 08 January 2026 00:52:13 +0000 (0:00:02.212) 0:03:05.550 ****** 2026-01-08 00:55:56.650767 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:55:56.650771 | orchestrator | 2026-01-08 00:55:56.650775 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-01-08 00:55:56.650779 | orchestrator | Thursday 08 January 2026 00:52:14 +0000 (0:00:01.592) 0:03:07.142 ****** 2026-01-08 00:55:56.650811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.650818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.650827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.650836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.650841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.650845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 
00:55:56.650877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.650883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.650891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.650899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.650903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.650907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.650911 | orchestrator | 2026-01-08 00:55:56.650915 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-01-08 00:55:56.650919 | orchestrator | Thursday 08 January 2026 00:52:20 +0000 (0:00:05.746) 0:03:12.889 ****** 2026-01-08 00:55:56.650953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.650962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.650969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.650973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.650981 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.650985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.651015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.651022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.651034 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.651038 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.651042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.651046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.651050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.651081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.651087 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.651091 | orchestrator | 2026-01-08 00:55:56.651098 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-01-08 00:55:56.651103 | orchestrator | Thursday 08 
January 2026 00:52:21 +0000 (0:00:00.842) 0:03:13.732 ****** 2026-01-08 00:55:56.651113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.651117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.651121 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.651126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.651132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.651136 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.651140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.651144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.651149 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.651153 | orchestrator | 2026-01-08 00:55:56.651157 | 
orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-01-08 00:55:56.651161 | orchestrator | Thursday 08 January 2026 00:52:22 +0000 (0:00:00.789) 0:03:14.522 ****** 2026-01-08 00:55:56.651165 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:55:56.651169 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:55:56.651173 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:55:56.651177 | orchestrator | 2026-01-08 00:55:56.651182 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-01-08 00:55:56.651186 | orchestrator | Thursday 08 January 2026 00:52:23 +0000 (0:00:01.145) 0:03:15.667 ****** 2026-01-08 00:55:56.651190 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:55:56.651194 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:55:56.651198 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:55:56.651202 | orchestrator | 2026-01-08 00:55:56.651206 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-01-08 00:55:56.651210 | orchestrator | Thursday 08 January 2026 00:52:25 +0000 (0:00:01.954) 0:03:17.622 ****** 2026-01-08 00:55:56.651214 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:55:56.651218 | orchestrator | 2026-01-08 00:55:56.651223 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-01-08 00:55:56.651226 | orchestrator | Thursday 08 January 2026 00:52:26 +0000 (0:00:01.435) 0:03:19.057 ****** 2026-01-08 00:55:56.651231 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-08 00:55:56.651235 | orchestrator | 2026-01-08 00:55:56.651238 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-01-08 00:55:56.651242 | orchestrator | Thursday 08 January 2026 00:52:29 +0000 (0:00:02.955) 0:03:22.013 ****** 2026-01-08 
00:55:56.651282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-08 00:55:56.651300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-08 00:55:56.651305 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.651310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-08 00:55:56.651314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-08 00:55:56.651352 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.651361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-08 00:55:56.651366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-08 00:55:56.651387 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.651392 | orchestrator | 2026-01-08 00:55:56.651396 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-01-08 00:55:56.651400 | orchestrator | Thursday 08 January 2026 00:52:33 +0000 (0:00:03.613) 0:03:25.626 ****** 
2026-01-08 00:55:56.651438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-08 00:55:56.651448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-08 00:55:56.651455 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.651463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-08 00:55:56.651468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-08 00:55:56.651475 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.651507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-08 00:55:56.651520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2025.1', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-08 00:55:56.651525 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.651529 | orchestrator | 2026-01-08 00:55:56.651533 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-01-08 00:55:56.651537 | orchestrator | Thursday 08 January 2026 00:52:36 +0000 (0:00:02.950) 0:03:28.577 ****** 2026-01-08 
00:55:56.651541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-08 00:55:56.651546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-08 00:55:56.651554 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.651558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-08 00:55:56.651592 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-08 00:55:56.651597 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.651602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-08 00:55:56.651613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-08 00:55:56.651617 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.651622 
| orchestrator | 2026-01-08 00:55:56.651626 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-01-08 00:55:56.651630 | orchestrator | Thursday 08 January 2026 00:52:38 +0000 (0:00:02.332) 0:03:30.909 ****** 2026-01-08 00:55:56.651634 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:55:56.651638 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:55:56.651642 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:55:56.651646 | orchestrator | 2026-01-08 00:55:56.651650 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-01-08 00:55:56.651655 | orchestrator | Thursday 08 January 2026 00:52:40 +0000 (0:00:01.945) 0:03:32.854 ****** 2026-01-08 00:55:56.651659 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.651663 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.651667 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.651670 | orchestrator | 2026-01-08 00:55:56.651674 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-01-08 00:55:56.651678 | orchestrator | Thursday 08 January 2026 00:52:42 +0000 (0:00:01.777) 0:03:34.631 ****** 2026-01-08 00:55:56.651686 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.651690 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.651694 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.651698 | orchestrator | 2026-01-08 00:55:56.651702 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-01-08 00:55:56.651705 | orchestrator | Thursday 08 January 2026 00:52:42 +0000 (0:00:00.320) 0:03:34.951 ****** 2026-01-08 00:55:56.651709 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:55:56.651713 | orchestrator | 2026-01-08 00:55:56.651717 | orchestrator | TASK [haproxy-config : Copying over 
memcached haproxy config] ****************** 2026-01-08 00:55:56.651721 | orchestrator | Thursday 08 January 2026 00:52:44 +0000 (0:00:01.419) 0:03:36.371 ****** 2026-01-08 00:55:56.651725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-08 00:55:56.651762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-08 00:55:56.651770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-08 00:55:56.651774 | orchestrator | 2026-01-08 00:55:56.651781 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-01-08 00:55:56.651785 | orchestrator | Thursday 08 January 2026 00:52:45 +0000 (0:00:01.702) 0:03:38.073 ****** 2026-01-08 00:55:56.651792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-08 00:55:56.651800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-08 00:55:56.651805 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.651809 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.651813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2025.1', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-08 00:55:56.651817 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.651821 | orchestrator | 2026-01-08 00:55:56.651825 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-01-08 00:55:56.651829 | orchestrator | Thursday 08 January 2026 00:52:46 +0000 (0:00:00.385) 0:03:38.459 ****** 2026-01-08 00:55:56.651833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': 
True}})  2026-01-08 00:55:56.651837 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.651874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-08 00:55:56.651881 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.651885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-08 00:55:56.651889 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.651893 | orchestrator | 2026-01-08 00:55:56.651897 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-01-08 00:55:56.651901 | orchestrator | Thursday 08 January 2026 00:52:47 +0000 (0:00:00.999) 0:03:39.459 ****** 2026-01-08 00:55:56.651905 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.651910 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.651914 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.651918 | orchestrator | 2026-01-08 00:55:56.651922 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-01-08 00:55:56.651926 | orchestrator | Thursday 08 January 2026 00:52:47 +0000 (0:00:00.446) 0:03:39.906 ****** 2026-01-08 00:55:56.651932 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.651937 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.651940 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.651949 | orchestrator | 2026-01-08 00:55:56.651953 | orchestrator | TASK [include_role : mistral] 
************************************************** 2026-01-08 00:55:56.651957 | orchestrator | Thursday 08 January 2026 00:52:48 +0000 (0:00:01.350) 0:03:41.257 ****** 2026-01-08 00:55:56.651961 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.651965 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.651973 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.651977 | orchestrator | 2026-01-08 00:55:56.651981 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-01-08 00:55:56.651985 | orchestrator | Thursday 08 January 2026 00:52:49 +0000 (0:00:00.313) 0:03:41.570 ****** 2026-01-08 00:55:56.651989 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:55:56.651993 | orchestrator | 2026-01-08 00:55:56.651997 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-01-08 00:55:56.652001 | orchestrator | Thursday 08 January 2026 00:52:50 +0000 (0:00:01.586) 0:03:43.156 ****** 2026-01-08 00:55:56.652007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.652013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.652052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-08 00:55:56.652062 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-08 00:55:56.652074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.652080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-08 00:55:56.652085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-08 00:55:56.652090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-08 00:55:56.652125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-08 00:55:56.652132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.652147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-08 00:55:56.652153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}}})  2026-01-08 00:55:56.652157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.652161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.652195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-08 00:55:56.652210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.652215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-08 00:55:56.652223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-08 00:55:56.652227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-08 00:55:56.652264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.652279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-08 00:55:56.652287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-08 00:55:56.652293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-08 00:55:56.652297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-08 00:55:56.652301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.652336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-08 00:55:56.652348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.652360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': 
{'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-08 00:55:56.652365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.652390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-08 
00:55:56.652396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-08 00:55:56.652439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 
'false'}}})  2026-01-08 00:55:56.652449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-08 00:55:56.652454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-08 00:55:56.652458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.652465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-08 00:55:56.652506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-08 00:55:56.652519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-08 00:55:56.652527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-08 00:55:56.652531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.652536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-08 00:55:56.652540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-08 00:55:56.652545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.652581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-08 00:55:56.652590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-08 00:55:56.652595 | orchestrator | 2026-01-08 00:55:56.652602 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-01-08 00:55:56.652606 | orchestrator | Thursday 08 January 2026 00:52:56 +0000 (0:00:05.582) 0:03:48.739 ****** 2026-01-08 00:55:56.652610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.652615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.652655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-08 00:55:56.652662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-08 00:55:56.652670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.652675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-08 00:55:56.652693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-08 00:55:56.652697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.652733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-08 00:55:56.652740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.652747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.652752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-08 00:55:56.652756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-08 00:55:56.652797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2025.1', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.652803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.652813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-08 00:55:56.652817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-dhcp-agent:2025.1', 
'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-08 00:55:56.652825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-08 00:55:56.652859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.652865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/neutron-l3-agent:2025.1', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-08 00:55:56.652875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-08 00:55:56.652880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-08 00:55:56.652885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.652894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.652929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-08 00:55:56.652935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2025.1', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-08 00:55:56.652945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-08 00:55:56.652950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-08 00:55:56.652954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-08 00:55:56.652962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-08 00:55:56.652966 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.652971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-08 00:55:56.653003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-08 00:55:56.653010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.653017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-08 00:55:56.653021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': 
{'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-08 00:55:56.653031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.653036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-08 00:55:56.653067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2025.1', 'privileged': True, 'enabled': 
False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-08 00:55:56.653073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.653080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2025.1', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-08 00:55:56.653085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-08 00:55:56.653094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2025.1', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.653101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': 
'30'}}})  2026-01-08 00:55:56.653135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2025.1', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-08 00:55:56.653142 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.653149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2025.1', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-08 00:55:56.653156 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.653161 | orchestrator | 2026-01-08 00:55:56.653165 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] 
*********************** 2026-01-08 00:55:56.653169 | orchestrator | Thursday 08 January 2026 00:52:58 +0000 (0:00:01.791) 0:03:50.530 ****** 2026-01-08 00:55:56.653173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.653177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.653185 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.653189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.653193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.653197 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.653201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.653206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.653210 | orchestrator | skipping: 
[testbed-node-0] 2026-01-08 00:55:56.653214 | orchestrator | 2026-01-08 00:55:56.653221 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-01-08 00:55:56.653225 | orchestrator | Thursday 08 January 2026 00:53:00 +0000 (0:00:01.972) 0:03:52.503 ****** 2026-01-08 00:55:56.653229 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:55:56.653234 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:55:56.653238 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:55:56.653242 | orchestrator | 2026-01-08 00:55:56.653246 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-01-08 00:55:56.653250 | orchestrator | Thursday 08 January 2026 00:53:01 +0000 (0:00:01.329) 0:03:53.833 ****** 2026-01-08 00:55:56.653254 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:55:56.653258 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:55:56.653262 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:55:56.653266 | orchestrator | 2026-01-08 00:55:56.653270 | orchestrator | TASK [include_role : placement] ************************************************ 2026-01-08 00:55:56.653274 | orchestrator | Thursday 08 January 2026 00:53:03 +0000 (0:00:02.202) 0:03:56.035 ****** 2026-01-08 00:55:56.653278 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:55:56.653282 | orchestrator | 2026-01-08 00:55:56.653286 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-01-08 00:55:56.653290 | orchestrator | Thursday 08 January 2026 00:53:05 +0000 (0:00:01.599) 0:03:57.635 ****** 2026-01-08 00:55:56.653330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-08 00:55:56.653341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-08 00:55:56.653355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-08 00:55:56.653359 | orchestrator | 2026-01-08 00:55:56.653363 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-01-08 00:55:56.653368 | orchestrator | Thursday 08 January 2026 00:53:09 +0000 (0:00:04.059) 0:04:01.694 ****** 2026-01-08 00:55:56.653427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-08 00:55:56.653433 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.653438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-08 00:55:56.653449 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.653456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-08 00:55:56.653460 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.653464 | orchestrator | 2026-01-08 00:55:56.653468 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-01-08 00:55:56.653472 | orchestrator | Thursday 08 January 2026 00:53:10 +0000 (0:00:00.678) 0:04:02.372 ****** 2026-01-08 00:55:56.653477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-08 00:55:56.653482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-08 00:55:56.653486 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.653491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-08 00:55:56.653495 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-08 00:55:56.653500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-08 00:55:56.653504 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.653540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-08 00:55:56.653546 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.653550 | orchestrator | 2026-01-08 00:55:56.653554 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-01-08 00:55:56.653561 | orchestrator | Thursday 08 January 2026 00:53:11 +0000 (0:00:01.121) 0:04:03.494 ****** 2026-01-08 00:55:56.653569 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:55:56.653573 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:55:56.653577 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:55:56.653581 | orchestrator | 2026-01-08 00:55:56.653585 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-01-08 00:55:56.653589 | orchestrator | Thursday 08 January 2026 00:53:12 +0000 (0:00:01.477) 0:04:04.971 ****** 2026-01-08 00:55:56.653593 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:55:56.653597 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:55:56.653601 | orchestrator | changed: 
[testbed-node-2] 2026-01-08 00:55:56.653605 | orchestrator | 2026-01-08 00:55:56.653609 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-01-08 00:55:56.653613 | orchestrator | Thursday 08 January 2026 00:53:14 +0000 (0:00:02.257) 0:04:07.228 ****** 2026-01-08 00:55:56.653617 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:55:56.653622 | orchestrator | 2026-01-08 00:55:56.653626 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-01-08 00:55:56.653630 | orchestrator | Thursday 08 January 2026 00:53:16 +0000 (0:00:01.476) 0:04:08.705 ****** 2026-01-08 00:55:56.653638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.653643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.653686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.653700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.653709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.653714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.653718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.653744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.653753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.653760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.653778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.653785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.653790 | orchestrator | 2026-01-08 00:55:56.653794 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-01-08 00:55:56.653799 | orchestrator | Thursday 08 January 2026 00:53:24 +0000 (0:00:07.745) 0:04:16.450 ****** 2026-01-08 00:55:56.653819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.653833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.653844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.653851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.653857 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.653864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.653895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.653903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.653913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.653919 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.653925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.653933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.654006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 00:55:56.654044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2025.1', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-08 
00:55:56.654052 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.654058 | orchestrator | 2026-01-08 00:55:56.654064 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-01-08 00:55:56.654069 | orchestrator | Thursday 08 January 2026 00:53:24 +0000 (0:00:00.811) 0:04:17.261 ****** 2026-01-08 00:55:56.654076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.654086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.654092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.654100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.654112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.654120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.654124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.654135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.654139 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.654143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.654146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.654151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.654173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.654178 | 
orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.654181 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.654185 | orchestrator | 2026-01-08 00:55:56.654189 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-01-08 00:55:56.654193 | orchestrator | Thursday 08 January 2026 00:53:26 +0000 (0:00:01.101) 0:04:18.363 ****** 2026-01-08 00:55:56.654197 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:55:56.654201 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:55:56.654205 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:55:56.654209 | orchestrator | 2026-01-08 00:55:56.654213 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-01-08 00:55:56.654217 | orchestrator | Thursday 08 January 2026 00:53:27 +0000 (0:00:01.653) 0:04:20.016 ****** 2026-01-08 00:55:56.654224 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:55:56.654230 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:55:56.654236 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:55:56.654242 | orchestrator | 2026-01-08 00:55:56.654247 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-01-08 00:55:56.654252 | orchestrator | Thursday 08 January 2026 00:53:29 +0000 (0:00:02.008) 0:04:22.024 ****** 2026-01-08 00:55:56.654258 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:55:56.654264 | orchestrator | 2026-01-08 00:55:56.654270 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-01-08 00:55:56.654275 | orchestrator | Thursday 08 January 2026 00:53:31 +0000 (0:00:01.725) 0:04:23.750 ****** 2026-01-08 00:55:56.654282 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-01-08 
00:55:56.654288 | orchestrator | 2026-01-08 00:55:56.654298 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-01-08 00:55:56.654305 | orchestrator | Thursday 08 January 2026 00:53:32 +0000 (0:00:01.198) 0:04:24.949 ****** 2026-01-08 00:55:56.654312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-08 00:55:56.654327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-08 00:55:56.654334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-08 00:55:56.654341 | orchestrator | 2026-01-08 00:55:56.654348 | orchestrator | TASK 
[haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-01-08 00:55:56.654353 | orchestrator | Thursday 08 January 2026 00:53:37 +0000 (0:00:04.500) 0:04:29.450 ******
2026-01-08 00:55:56.654358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-08 00:55:56.654362 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:55:56.654404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-08 00:55:56.654411 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.654416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-08 00:55:56.654421 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.654425 | orchestrator |
2026-01-08 00:55:56.654430 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-01-08 00:55:56.654434 | orchestrator | Thursday 08 January 2026 00:53:38 +0000 (0:00:01.433) 0:04:30.883 ******
2026-01-08 00:55:56.654439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-08 00:55:56.654448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-08 00:55:56.654459 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:55:56.654464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-08 00:55:56.654469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-08 00:55:56.654473 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.654478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-08 00:55:56.654483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-01-08 00:55:56.654487 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.654492 | orchestrator |
2026-01-08 00:55:56.654496 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-01-08 00:55:56.654501 | orchestrator | Thursday 08 January 2026 00:53:40 +0000 (0:00:02.051) 0:04:32.934 ******
2026-01-08 00:55:56.654505 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:55:56.654509 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:55:56.654514 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:55:56.654518 | orchestrator |
2026-01-08 00:55:56.654523 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-01-08 00:55:56.654527 | orchestrator | Thursday 08 January 2026 00:53:43 +0000 (0:00:02.901) 0:04:35.836 ******
2026-01-08 00:55:56.654532 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:55:56.654536 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:55:56.654540 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:55:56.654545 | orchestrator |
2026-01-08 00:55:56.654549 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-01-08 00:55:56.654553 | orchestrator | Thursday 08 January 2026 00:53:47 +0000 (0:00:03.513) 0:04:39.350 ******
2026-01-08 00:55:56.654559 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-01-08 00:55:56.654563 | orchestrator |
2026-01-08 00:55:56.654568 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-01-08 00:55:56.654572 | orchestrator | Thursday 08 January 2026 00:53:47 +0000 (0:00:00.952) 0:04:40.302 ******
2026-01-08 00:55:56.654577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-08 00:55:56.654582 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:55:56.654599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-08 00:55:56.654611 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.654616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-08 00:55:56.654621 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.654625 | orchestrator |
2026-01-08 00:55:56.654629 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-01-08 00:55:56.654634 | orchestrator | Thursday 08 January 2026 00:53:49 +0000 (0:00:01.098) 0:04:41.401 ******
2026-01-08 00:55:56.654641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-08 00:55:56.654646 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:55:56.654650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-08 00:55:56.654655 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.654659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-01-08 00:55:56.654664 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.654668 | orchestrator |
2026-01-08 00:55:56.654672 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-01-08 00:55:56.654677 | orchestrator | Thursday 08 January 2026 00:53:50 +0000 (0:00:01.365) 0:04:42.766 ******
2026-01-08 00:55:56.654681 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:55:56.654685 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.654690 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.654694 | orchestrator |
2026-01-08 00:55:56.654699 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-01-08 00:55:56.654704 | orchestrator | Thursday 08 January 2026 00:53:52 +0000 (0:00:01.629) 0:04:44.396 ******
2026-01-08 00:55:56.654708 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:55:56.654713 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:55:56.654717 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:55:56.654722 | orchestrator |
2026-01-08 00:55:56.654726 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-01-08 00:55:56.654730 | orchestrator | Thursday 08 January 2026 00:53:54 +0000 (0:00:02.378) 0:04:46.775 ******
2026-01-08 00:55:56.654733 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:55:56.654737 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:55:56.654744 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:55:56.654748 | orchestrator |
2026-01-08 00:55:56.654752 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-01-08 00:55:56.654758 | orchestrator | Thursday 08 January 2026 00:53:57 +0000 (0:00:03.266) 0:04:50.041 ******
2026-01-08 00:55:56.654780 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-01-08 00:55:56.654787 | orchestrator |
2026-01-08 00:55:56.654793 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-01-08 00:55:56.654799 | orchestrator | Thursday 08 January 2026 00:53:58 +0000 (0:00:00.916) 0:04:50.957 ******
2026-01-08 00:55:56.654806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-08 00:55:56.654812 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:55:56.654818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-08 00:55:56.654829 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.654835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-08 00:55:56.654842 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.654848 | orchestrator |
2026-01-08 00:55:56.654855 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-01-08 00:55:56.654861 | orchestrator | Thursday 08 January 2026 00:54:00 +0000 (0:00:01.690) 0:04:52.648 ******
2026-01-08 00:55:56.654867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-08 00:55:56.654874 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:55:56.654881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-08 00:55:56.654894 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.654900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-01-08 00:55:56.654907 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.654913 | orchestrator |
2026-01-08 00:55:56.654918 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-01-08 00:55:56.654922 | orchestrator | Thursday 08 January 2026 00:54:01 +0000 (0:00:01.034) 0:04:53.683 ******
2026-01-08 00:55:56.654926 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:55:56.654930 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.654953 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.654957 | orchestrator |
2026-01-08 00:55:56.654961 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-01-08 00:55:56.654965 | orchestrator | Thursday 08 January 2026 00:54:02 +0000 (0:00:01.483) 0:04:55.166 ******
2026-01-08 00:55:56.654969 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:55:56.654973 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:55:56.654977 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:55:56.654981 | orchestrator |
2026-01-08 00:55:56.654984 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-01-08 00:55:56.654988 | orchestrator | Thursday 08 January 2026 00:54:05 +0000 (0:00:02.428) 0:04:57.594 ******
2026-01-08 00:55:56.654992 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:55:56.654996 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:55:56.655000 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:55:56.655004 | orchestrator |
2026-01-08 00:55:56.655008 | orchestrator | TASK [include_role : octavia] **************************************************
2026-01-08 00:55:56.655012 | orchestrator | Thursday 08 January 2026 00:54:08 +0000 (0:00:03.246) 0:05:00.840 ******
2026-01-08 00:55:56.655018 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:55:56.655025 | orchestrator |
2026-01-08 00:55:56.655031 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-01-08 00:55:56.655037 | orchestrator | Thursday 08 January 2026 00:54:10 +0000 (0:00:01.633) 0:05:02.473 ******
2026-01-08 00:55:56.655049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-08 00:55:56.655057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-08 00:55:56.655070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-08 00:55:56.655077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-08 00:55:56.655101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.655107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-08 00:55:56.655115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-08 00:55:56.655119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-08 00:55:56.655127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-08 00:55:56.655131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-08 00:55:56.655150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-08 00:55:56.655157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.655167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-08 00:55:56.655173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-08 00:55:56.655198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.655205 | orchestrator |
2026-01-08 00:55:56.655212 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2026-01-08 00:55:56.655218 | orchestrator | Thursday 08 January 2026 00:54:13 +0000 (0:00:03.661) 0:05:06.135 ******
2026-01-08 00:55:56.655224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-08 00:55:56.655250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-08 00:55:56.655256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-08 00:55:56.655265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-08 00:55:56.655269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.655277 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:55:56.655281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-08 00:55:56.655285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-08 00:55:56.655301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-08 00:55:56.655305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-08 00:55:56.655312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.655316 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.655325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-08 00:55:56.655329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-08 00:55:56.655333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-08 00:55:56.655350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-08 00:55:56.655355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-08 00:55:56.655359 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:55:56.655363 | orchestrator |
2026-01-08 00:55:56.655367 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-01-08 00:55:56.655408 | orchestrator | Thursday 08 January 2026 00:54:14 +0000 (0:00:01.075) 0:05:07.210 ******
2026-01-08 00:55:56.655415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-01-08 00:55:56.655429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-01-08 00:55:56.655441 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:55:56.655447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-01-08 00:55:56.655453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-01-08 00:55:56.655459 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:55:56.655465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value':
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-08 00:55:56.655471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-08 00:55:56.655478 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.655484 | orchestrator | 2026-01-08 00:55:56.655490 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-01-08 00:55:56.655497 | orchestrator | Thursday 08 January 2026 00:54:16 +0000 (0:00:01.290) 0:05:08.501 ****** 2026-01-08 00:55:56.655501 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:55:56.655505 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:55:56.655509 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:55:56.655513 | orchestrator | 2026-01-08 00:55:56.655517 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-01-08 00:55:56.655521 | orchestrator | Thursday 08 January 2026 00:54:17 +0000 (0:00:01.193) 0:05:09.694 ****** 2026-01-08 00:55:56.655525 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:55:56.655529 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:55:56.655533 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:55:56.655537 | orchestrator | 2026-01-08 00:55:56.655541 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-01-08 00:55:56.655545 | orchestrator | Thursday 08 January 2026 00:54:19 +0000 (0:00:02.083) 0:05:11.778 ****** 2026-01-08 00:55:56.655551 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:55:56.655557 | orchestrator | 2026-01-08 00:55:56.655563 | orchestrator | TASK [haproxy-config : Copying over opensearch 
haproxy config] ***************** 2026-01-08 00:55:56.655568 | orchestrator | Thursday 08 January 2026 00:54:21 +0000 (0:00:01.730) 0:05:13.508 ****** 2026-01-08 00:55:56.655599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.655608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.655625 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.655633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-08 00:55:56.655655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-08 00:55:56.655661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': 
'30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-08 00:55:56.655670 | orchestrator | 2026-01-08 00:55:56.655677 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-01-08 00:55:56.655681 | orchestrator | Thursday 08 January 2026 00:54:26 +0000 (0:00:05.123) 0:05:18.632 ****** 2026-01-08 00:55:56.655686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.655690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 
'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-08 00:55:56.655694 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.655710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.655721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-08 00:55:56.655725 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.655729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.655733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-08 00:55:56.655737 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.655741 | orchestrator | 2026-01-08 00:55:56.655745 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-01-08 00:55:56.655749 | orchestrator | Thursday 08 January 2026 00:54:26 +0000 (0:00:00.662) 0:05:19.294 ****** 2026-01-08 00:55:56.655753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.655773 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-08 00:55:56.655778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-08 00:55:56.655784 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.655787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.655792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-08 00:55:56.655798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-08 00:55:56.655802 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.655806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  
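The `haproxy-config` tasks above loop over each service's `haproxy` sub-dict (entries such as `opensearch`, `opensearch_dashboards_external`, `octavia_api_external`), rendering one frontend/backend pair per enabled entry and skipping hosts where the condition does not hold — which is exactly what the repeated `skipping:` records show. A minimal sketch of that iteration, assuming a simplified service dict shaped like the items in this log (this is an illustration, not the actual kolla-ansible role code; `haproxy_entries` is a hypothetical helper):

```python
# Hedged sketch: how a haproxy-config style loop might walk the
# 'haproxy' dict of a kolla service definition, as seen in the
# log items above. Not the real role implementation.

def haproxy_entries(service):
    """Yield (name, listen_port, external) for each enabled haproxy entry.

    kolla service dicts use both booleans and 'yes'/'no' strings for
    'enabled' (compare the octavia vs. opensearch items in the log),
    so both forms are accepted here.
    """
    for name, cfg in service.get("haproxy", {}).items():
        if str(cfg.get("enabled")).lower() not in ("true", "yes"):
            continue  # mirrors the 'skipping:' records in the log
        # listen_port falls back to port, as in the log's entries
        yield name, cfg.get("listen_port", cfg["port"]), bool(cfg.get("external"))


# Simplified version of the opensearch item from the log above
opensearch = {
    "haproxy": {
        "opensearch": {
            "enabled": True, "mode": "http",
            "external": False, "port": "9200",
        },
        "opensearch_dashboards_external": {
            "enabled": True, "mode": "http",
            "external": True, "port": "5601", "listen_port": "5601",
        },
    }
}

print(list(haproxy_entries(opensearch)))
# → [('opensearch', '9200', False), ('opensearch_dashboards_external', '5601', True)]
```

Disabled entries (e.g. `prometheus_server_external` with `'enabled': False` later in the log) simply produce no output, matching the skipped items.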
2026-01-08 00:55:56.655810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-08 00:55:56.655814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-08 00:55:56.655818 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.655822 | orchestrator | 2026-01-08 00:55:56.655826 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-01-08 00:55:56.655830 | orchestrator | Thursday 08 January 2026 00:54:28 +0000 (0:00:01.683) 0:05:20.977 ****** 2026-01-08 00:55:56.655834 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.655838 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.655841 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.655845 | orchestrator | 2026-01-08 00:55:56.655849 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-01-08 00:55:56.655853 | orchestrator | Thursday 08 January 2026 00:54:29 +0000 (0:00:00.439) 0:05:21.417 ****** 2026-01-08 00:55:56.655857 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.655861 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.655864 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.655868 | orchestrator | 2026-01-08 00:55:56.655872 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-01-08 00:55:56.655876 | orchestrator | Thursday 08 January 2026 00:54:30 
+0000 (0:00:01.399) 0:05:22.816 ****** 2026-01-08 00:55:56.655880 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:55:56.655888 | orchestrator | 2026-01-08 00:55:56.655892 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-01-08 00:55:56.655896 | orchestrator | Thursday 08 January 2026 00:54:32 +0000 (0:00:01.755) 0:05:24.571 ****** 2026-01-08 00:55:56.655913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-08 00:55:56.655918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-08 00:55:56.655922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 00:55:56.655927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 00:55:56.655931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:55:56.655938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:55:56.655942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:55:56.655957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-08 00:55:56.655985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:55:56.655993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 00:55:56.655999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 00:55:56.656006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 00:55:56.656017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:55:56.656043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:55:56.656050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 00:55:56.656062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.656070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-08 00:55:56.656082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.656107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:55:56.656114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-08 00:55:56.656120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:55:56.656125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:55:56.656132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:55:56.656136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:55:56.656153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-08 00:55:56.656158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-08 00:55:56.656163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-08 00:55:56.656169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:55:56.656173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:55:56.656180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-08 00:55:56.656184 | orchestrator | 2026-01-08 00:55:56.656188 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-01-08 00:55:56.656192 | orchestrator | Thursday 08 January 2026 00:54:36 +0000 (0:00:04.440) 0:05:29.012 ****** 2026-01-08 00:55:56.656209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 
'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-08 00:55:56.656214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 00:55:56.656218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:55:56.656224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:55:56.656228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 00:55:56.656236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-08 00:55:56.656251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.656256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 00:55:56.656262 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-08 00:55:56.656267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:55:56.656273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:55:56.656277 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:55:56.656281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:55:56.656287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 00:55:56.656291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-08 00:55:56.656295 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.656302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.656311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-08 00:55:56.656315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:55:56.656319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:55:56.656328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-08 00:55:56.656333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-08 00:55:56.656337 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.656343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 00:55:56.656351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:55:56.656355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:55:56.656359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 00:55:56.656365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:55:56.656414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-08 00:55:56.656421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 
00:55:56.656431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 00:55:56.656435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-08 00:55:56.656439 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.656443 | orchestrator | 2026-01-08 00:55:56.656447 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-01-08 00:55:56.656451 | orchestrator | Thursday 08 January 2026 00:54:37 +0000 (0:00:00.980) 0:05:29.992 ****** 2026-01-08 00:55:56.656456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-08 00:55:56.656461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-08 00:55:56.656467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.656473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.656478 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.656482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-08 00:55:56.656486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-08 00:55:56.656493 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.656500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.656504 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.656508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-08 00:55:56.656512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-08 00:55:56.656516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.656520 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-08 00:55:56.656523 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.656527 | orchestrator | 2026-01-08 00:55:56.656531 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-01-08 00:55:56.656535 | orchestrator | Thursday 08 January 2026 00:54:38 +0000 (0:00:01.056) 0:05:31.048 ****** 2026-01-08 00:55:56.656539 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.656543 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.656547 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.656550 | orchestrator | 2026-01-08 00:55:56.656554 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-01-08 00:55:56.656558 | orchestrator | Thursday 08 January 2026 00:54:39 +0000 (0:00:00.795) 0:05:31.844 ****** 2026-01-08 00:55:56.656562 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.656566 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.656569 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.656573 | orchestrator | 2026-01-08 00:55:56.656577 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-01-08 00:55:56.656581 | orchestrator | Thursday 08 January 2026 00:54:40 +0000 (0:00:01.365) 0:05:33.209 ****** 2026-01-08 00:55:56.656585 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:55:56.656589 | orchestrator | 2026-01-08 00:55:56.656592 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-01-08 
00:55:56.656599 | orchestrator | Thursday 08 January 2026 00:54:42 +0000 (0:00:01.492) 0:05:34.702 ****** 2026-01-08 00:55:56.656606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-08 00:55:56.656614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 
'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-08 00:55:56.656618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-08 00:55:56.656623 | orchestrator | 2026-01-08 00:55:56.656626 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-01-08 00:55:56.656630 | orchestrator | Thursday 08 January 2026 00:54:44 +0000 (0:00:02.627) 0:05:37.330 ****** 2026-01-08 00:55:56.656637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-08 00:55:56.656644 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.656648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-08 00:55:56.656653 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.656659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2025.1', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-08 00:55:56.656664 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.656667 | orchestrator | 2026-01-08 00:55:56.656671 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-01-08 00:55:56.656675 | orchestrator | Thursday 08 January 2026 00:54:45 +0000 (0:00:00.850) 0:05:38.180 ****** 2026-01-08 00:55:56.656679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-08 00:55:56.656684 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.656688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-08 00:55:56.656692 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.656695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-08 00:55:56.656699 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.656703 | orchestrator | 2026-01-08 00:55:56.656707 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-01-08 
00:55:56.656711 | orchestrator | Thursday 08 January 2026 00:54:46 +0000 (0:00:00.657) 0:05:38.838 ****** 2026-01-08 00:55:56.656715 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.656718 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.656722 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.656726 | orchestrator | 2026-01-08 00:55:56.656730 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-01-08 00:55:56.656733 | orchestrator | Thursday 08 January 2026 00:54:46 +0000 (0:00:00.451) 0:05:39.289 ****** 2026-01-08 00:55:56.656740 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.656744 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.656748 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.656752 | orchestrator | 2026-01-08 00:55:56.656755 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-01-08 00:55:56.656759 | orchestrator | Thursday 08 January 2026 00:54:48 +0000 (0:00:01.478) 0:05:40.768 ****** 2026-01-08 00:55:56.656763 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:55:56.656767 | orchestrator | 2026-01-08 00:55:56.656771 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-01-08 00:55:56.656775 | orchestrator | Thursday 08 January 2026 00:54:50 +0000 (0:00:01.821) 0:05:42.589 ****** 2026-01-08 00:55:56.656782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-01-08 00:55:56.656789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-01-08 00:55:56.656793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-01-08 00:55:56.656798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-08 00:55:56.656807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-08 00:55:56.656814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-08 00:55:56.656819 | orchestrator | 2026-01-08 00:55:56.656823 | orchestrator | TASK [haproxy-config : Add 
configuration for skyline when using single external frontend] *** 2026-01-08 00:55:56.656827 | orchestrator | Thursday 08 January 2026 00:54:56 +0000 (0:00:06.366) 0:05:48.956 ****** 2026-01-08 00:55:56.656831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-01-08 00:55:56.656839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-08 00:55:56.656843 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.656849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-01-08 00:55:56.656856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-08 00:55:56.656860 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.656865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2025.1', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-01-08 00:55:56.656872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2025.1', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-08 00:55:56.656878 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.656882 | orchestrator | 2026-01-08 00:55:56.656886 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-01-08 00:55:56.656890 | orchestrator | Thursday 08 January 2026 00:54:57 +0000 (0:00:01.103) 0:05:50.059 ****** 2026-01-08 00:55:56.656894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-08 00:55:56.656898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-08 00:55:56.656903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-08 
00:55:56.656907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-08 00:55:56.656911 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.656917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-08 00:55:56.656921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-08 00:55:56.656925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-08 00:55:56.656929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-08 00:55:56.656936 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.656940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-08 
00:55:56.656944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-08 00:55:56.656948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-08 00:55:56.656952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-08 00:55:56.656956 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.656960 | orchestrator | 2026-01-08 00:55:56.656964 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-01-08 00:55:56.656967 | orchestrator | Thursday 08 January 2026 00:54:59 +0000 (0:00:01.370) 0:05:51.429 ****** 2026-01-08 00:55:56.656971 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:55:56.656975 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:55:56.656979 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:55:56.656983 | orchestrator | 2026-01-08 00:55:56.656987 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-01-08 00:55:56.656991 | orchestrator | Thursday 08 January 2026 00:55:00 +0000 (0:00:01.366) 0:05:52.796 ****** 2026-01-08 00:55:56.656994 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:55:56.657000 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:55:56.657004 | orchestrator | changed: 
[testbed-node-2] 2026-01-08 00:55:56.657008 | orchestrator | 2026-01-08 00:55:56.657012 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-01-08 00:55:56.657016 | orchestrator | Thursday 08 January 2026 00:55:02 +0000 (0:00:02.198) 0:05:54.994 ****** 2026-01-08 00:55:56.657020 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.657023 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.657027 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.657031 | orchestrator | 2026-01-08 00:55:56.657035 | orchestrator | TASK [include_role : trove] **************************************************** 2026-01-08 00:55:56.657038 | orchestrator | Thursday 08 January 2026 00:55:03 +0000 (0:00:00.342) 0:05:55.337 ****** 2026-01-08 00:55:56.657042 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.657046 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.657050 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.657053 | orchestrator | 2026-01-08 00:55:56.657057 | orchestrator | TASK [include_role : venus] **************************************************** 2026-01-08 00:55:56.657061 | orchestrator | Thursday 08 January 2026 00:55:03 +0000 (0:00:00.694) 0:05:56.031 ****** 2026-01-08 00:55:56.657065 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.657069 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.657073 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.657076 | orchestrator | 2026-01-08 00:55:56.657080 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-01-08 00:55:56.657084 | orchestrator | Thursday 08 January 2026 00:55:04 +0000 (0:00:00.324) 0:05:56.355 ****** 2026-01-08 00:55:56.657092 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.657095 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.657099 | orchestrator | skipping: 
[testbed-node-2] 2026-01-08 00:55:56.657103 | orchestrator | 2026-01-08 00:55:56.657107 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-01-08 00:55:56.657110 | orchestrator | Thursday 08 January 2026 00:55:04 +0000 (0:00:00.329) 0:05:56.685 ****** 2026-01-08 00:55:56.657117 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.657121 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.657124 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.657128 | orchestrator | 2026-01-08 00:55:56.657132 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-01-08 00:55:56.657136 | orchestrator | Thursday 08 January 2026 00:55:04 +0000 (0:00:00.307) 0:05:56.993 ****** 2026-01-08 00:55:56.657140 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:55:56.657143 | orchestrator | 2026-01-08 00:55:56.657147 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-01-08 00:55:56.657151 | orchestrator | Thursday 08 January 2026 00:55:06 +0000 (0:00:01.906) 0:05:58.899 ****** 2026-01-08 00:55:56.657155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-08 00:55:56.657160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 
'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-08 00:55:56.657164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-08 00:55:56.657171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-08 00:55:56.657175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 
'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-08 00:55:56.657185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-08 00:55:56.657189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-08 00:55:56.657193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-08 00:55:56.657197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-08 00:55:56.657201 | orchestrator | 2026-01-08 00:55:56.657205 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-01-08 00:55:56.657209 | orchestrator | Thursday 08 January 2026 00:55:09 +0000 (0:00:02.771) 0:06:01.671 ****** 2026-01-08 00:55:56.657213 | orchestrator | changed: [testbed-node-0] => { 2026-01-08 00:55:56.657217 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:55:56.657221 | orchestrator | } 2026-01-08 00:55:56.657225 | orchestrator | changed: [testbed-node-1] => { 2026-01-08 00:55:56.657229 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:55:56.657232 | orchestrator | } 2026-01-08 00:55:56.657236 | orchestrator | changed: [testbed-node-2] => { 2026-01-08 00:55:56.657240 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:55:56.657244 | orchestrator | } 2026-01-08 00:55:56.657248 | orchestrator | 2026-01-08 00:55:56.657252 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-08 00:55:56.657255 | orchestrator | Thursday 08 January 2026 00:55:09 +0000 (0:00:00.395) 0:06:02.067 
****** 2026-01-08 00:55:56.657262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-08 00:55:56.657269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-08 00:55:56.657276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-08 00:55:56.657280 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.657284 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-08 00:55:56.657288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-08 00:55:56.657292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-08 00:55:56.657296 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.657300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-08 00:55:56.657309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2025.1', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-08 00:55:56.657314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-08 00:55:56.657318 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.657321 | orchestrator | 2026-01-08 00:55:56.657325 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP 
addresses on the API interface] ******* 2026-01-08 00:55:56.657329 | orchestrator | Thursday 08 January 2026 00:55:11 +0000 (0:00:01.710) 0:06:03.777 ****** 2026-01-08 00:55:56.657333 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:55:56.657337 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:55:56.657341 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:55:56.657345 | orchestrator | 2026-01-08 00:55:56.657349 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-01-08 00:55:56.657355 | orchestrator | Thursday 08 January 2026 00:55:12 +0000 (0:00:01.052) 0:06:04.829 ****** 2026-01-08 00:55:56.657359 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:55:56.657363 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:55:56.657367 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:55:56.657384 | orchestrator | 2026-01-08 00:55:56.657388 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-01-08 00:55:56.657393 | orchestrator | Thursday 08 January 2026 00:55:12 +0000 (0:00:00.371) 0:06:05.201 ****** 2026-01-08 00:55:56.657396 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:55:56.657400 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:55:56.657405 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:55:56.657411 | orchestrator | 2026-01-08 00:55:56.657417 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-01-08 00:55:56.657422 | orchestrator | Thursday 08 January 2026 00:55:13 +0000 (0:00:00.909) 0:06:06.111 ****** 2026-01-08 00:55:56.657426 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:55:56.657430 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:55:56.657434 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:55:56.657437 | orchestrator | 2026-01-08 00:55:56.657441 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-01-08 00:55:56.657445 | 
orchestrator | Thursday 08 January 2026 00:55:14 +0000 (0:00:00.907) 0:06:07.018 ****** 2026-01-08 00:55:56.657449 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:55:56.657453 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:55:56.657456 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:55:56.657460 | orchestrator | 2026-01-08 00:55:56.657464 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-01-08 00:55:56.657468 | orchestrator | Thursday 08 January 2026 00:55:15 +0000 (0:00:01.265) 0:06:08.283 ****** 2026-01-08 00:55:56.657472 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:55:56.657475 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:55:56.657479 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:55:56.657488 | orchestrator | 2026-01-08 00:55:56.657494 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-01-08 00:55:56.657499 | orchestrator | Thursday 08 January 2026 00:55:24 +0000 (0:00:08.502) 0:06:16.786 ****** 2026-01-08 00:55:56.657503 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:55:56.657507 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:55:56.657514 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:55:56.657519 | orchestrator | 2026-01-08 00:55:56.657522 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-01-08 00:55:56.657526 | orchestrator | Thursday 08 January 2026 00:55:25 +0000 (0:00:00.790) 0:06:17.576 ****** 2026-01-08 00:55:56.657530 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:55:56.657534 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:55:56.657538 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:55:56.657542 | orchestrator | 2026-01-08 00:55:56.657545 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-01-08 00:55:56.657549 | orchestrator | Thursday 08 January 2026 
00:55:38 +0000 (0:00:13.269) 0:06:30.846 ****** 2026-01-08 00:55:56.657553 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:55:56.657557 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:55:56.657561 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:55:56.657564 | orchestrator | 2026-01-08 00:55:56.657568 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-01-08 00:55:56.657572 | orchestrator | Thursday 08 January 2026 00:55:40 +0000 (0:00:01.529) 0:06:32.375 ****** 2026-01-08 00:55:56.657576 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:55:56.657580 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:55:56.657584 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:55:56.657587 | orchestrator | 2026-01-08 00:55:56.657591 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-01-08 00:55:56.657595 | orchestrator | Thursday 08 January 2026 00:55:43 +0000 (0:00:03.923) 0:06:36.299 ****** 2026-01-08 00:55:56.657599 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.657602 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.657606 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.657610 | orchestrator | 2026-01-08 00:55:56.657614 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-01-08 00:55:56.657618 | orchestrator | Thursday 08 January 2026 00:55:44 +0000 (0:00:00.350) 0:06:36.649 ****** 2026-01-08 00:55:56.657621 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.657628 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.657632 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.657635 | orchestrator | 2026-01-08 00:55:56.657639 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-01-08 00:55:56.657643 | orchestrator | Thursday 08 January 2026 00:55:44 +0000 
(0:00:00.401) 0:06:37.051 ****** 2026-01-08 00:55:56.657647 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.657650 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.657654 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.657658 | orchestrator | 2026-01-08 00:55:56.657662 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-01-08 00:55:56.657666 | orchestrator | Thursday 08 January 2026 00:55:45 +0000 (0:00:00.714) 0:06:37.765 ****** 2026-01-08 00:55:56.657669 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.657673 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.657677 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.657681 | orchestrator | 2026-01-08 00:55:56.657684 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-01-08 00:55:56.657688 | orchestrator | Thursday 08 January 2026 00:55:45 +0000 (0:00:00.367) 0:06:38.132 ****** 2026-01-08 00:55:56.657692 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.657696 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.657700 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.657703 | orchestrator | 2026-01-08 00:55:56.657707 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-01-08 00:55:56.657714 | orchestrator | Thursday 08 January 2026 00:55:46 +0000 (0:00:00.407) 0:06:38.540 ****** 2026-01-08 00:55:56.657718 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:55:56.657722 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:55:56.657726 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:55:56.657730 | orchestrator | 2026-01-08 00:55:56.657733 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-01-08 00:55:56.657737 | orchestrator | Thursday 08 January 2026 00:55:46 +0000 
(0:00:00.384) 0:06:38.924 ******
2026-01-08 00:55:56.657744 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:55:56.657748 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:55:56.657751 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:55:56.657755 | orchestrator |
2026-01-08 00:55:56.657759 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-01-08 00:55:56.657763 | orchestrator | Thursday 08 January 2026 00:55:51 +0000 (0:00:05.330) 0:06:44.255 ******
2026-01-08 00:55:56.657766 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:55:56.657770 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:55:56.657774 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:55:56.657778 | orchestrator |
2026-01-08 00:55:56.657782 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 00:55:56.657786 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-01-08 00:55:56.657790 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-01-08 00:55:56.657794 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-01-08 00:55:56.657798 | orchestrator |
2026-01-08 00:55:56.657802 | orchestrator |
2026-01-08 00:55:56.657806 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 00:55:56.657809 | orchestrator | Thursday 08 January 2026 00:55:52 +0000 (0:00:00.933) 0:06:45.188 ******
2026-01-08 00:55:56.657813 | orchestrator | ===============================================================================
2026-01-08 00:55:56.657817 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.27s
2026-01-08 00:55:56.657821 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.50s
2026-01-08 00:55:56.657824 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 7.75s
2026-01-08 00:55:56.657828 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.81s
2026-01-08 00:55:56.657832 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.37s
2026-01-08 00:55:56.657836 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 5.75s
2026-01-08 00:55:56.657839 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.58s
2026-01-08 00:55:56.657843 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.33s
2026-01-08 00:55:56.657847 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.12s
2026-01-08 00:55:56.657851 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.02s
2026-01-08 00:55:56.657854 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 4.82s
2026-01-08 00:55:56.657858 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.58s
2026-01-08 00:55:56.657862 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.50s
2026-01-08 00:55:56.657866 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.44s
2026-01-08 00:55:56.657869 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.41s
2026-01-08 00:55:56.657873 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.41s
2026-01-08 00:55:56.657880 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.28s
2026-01-08 00:55:56.657884 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 4.06s
2026-01-08 00:55:56.657888 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.04s
2026-01-08 00:55:56.657892 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.93s
2026-01-08 00:55:56.657897 | orchestrator | 2026-01-08 00:55:56 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED
2026-01-08 00:55:56.657902 | orchestrator | 2026-01-08 00:55:56 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED
2026-01-08 00:55:56.657906 | orchestrator | 2026-01-08 00:55:56 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:55:56.657910 | orchestrator | 2026-01-08 00:55:56 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:56:39.405775 | orchestrator | 2026-01-08 00:56:39 | INFO  |
Wait 1 second(s) until the next check 2026-01-08 00:56:42.457147 | orchestrator | 2026-01-08 00:56:42 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED 2026-01-08 00:56:42.459686 | orchestrator | 2026-01-08 00:56:42 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:56:42.463223 | orchestrator | 2026-01-08 00:56:42 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:56:42.463298 | orchestrator | 2026-01-08 00:56:42 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:56:45.503664 | orchestrator | 2026-01-08 00:56:45 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED 2026-01-08 00:56:45.508962 | orchestrator | 2026-01-08 00:56:45 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:56:45.510962 | orchestrator | 2026-01-08 00:56:45 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:56:45.511005 | orchestrator | 2026-01-08 00:56:45 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:56:48.557061 | orchestrator | 2026-01-08 00:56:48 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED 2026-01-08 00:56:48.561480 | orchestrator | 2026-01-08 00:56:48 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:56:48.562625 | orchestrator | 2026-01-08 00:56:48 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:56:48.562763 | orchestrator | 2026-01-08 00:56:48 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:56:51.609132 | orchestrator | 2026-01-08 00:56:51 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED 2026-01-08 00:56:51.611263 | orchestrator | 2026-01-08 00:56:51 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:56:51.613406 | orchestrator | 2026-01-08 00:56:51 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state 
STARTED 2026-01-08 00:56:51.613767 | orchestrator | 2026-01-08 00:56:51 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:56:54.658269 | orchestrator | 2026-01-08 00:56:54 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED 2026-01-08 00:56:54.660749 | orchestrator | 2026-01-08 00:56:54 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:56:54.664182 | orchestrator | 2026-01-08 00:56:54 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:56:54.664280 | orchestrator | 2026-01-08 00:56:54 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:56:57.709707 | orchestrator | 2026-01-08 00:56:57 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED 2026-01-08 00:56:57.711843 | orchestrator | 2026-01-08 00:56:57 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:56:57.714420 | orchestrator | 2026-01-08 00:56:57 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:56:57.714540 | orchestrator | 2026-01-08 00:56:57 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:57:00.765543 | orchestrator | 2026-01-08 00:57:00 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED 2026-01-08 00:57:00.767000 | orchestrator | 2026-01-08 00:57:00 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:57:00.770588 | orchestrator | 2026-01-08 00:57:00 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:57:00.770656 | orchestrator | 2026-01-08 00:57:00 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:57:03.822353 | orchestrator | 2026-01-08 00:57:03 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED 2026-01-08 00:57:03.822962 | orchestrator | 2026-01-08 00:57:03 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:57:03.825977 | orchestrator | 
2026-01-08 00:57:03 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:57:03.826068 | orchestrator | 2026-01-08 00:57:03 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:57:06.876931 | orchestrator | 2026-01-08 00:57:06 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED 2026-01-08 00:57:06.879044 | orchestrator | 2026-01-08 00:57:06 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:57:06.881147 | orchestrator | 2026-01-08 00:57:06 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:57:06.881238 | orchestrator | 2026-01-08 00:57:06 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:57:09.929076 | orchestrator | 2026-01-08 00:57:09 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED 2026-01-08 00:57:09.931512 | orchestrator | 2026-01-08 00:57:09 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:57:09.934801 | orchestrator | 2026-01-08 00:57:09 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:57:09.934871 | orchestrator | 2026-01-08 00:57:09 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:57:12.983838 | orchestrator | 2026-01-08 00:57:12 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED 2026-01-08 00:57:12.986706 | orchestrator | 2026-01-08 00:57:12 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:57:12.989283 | orchestrator | 2026-01-08 00:57:12 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:57:12.989343 | orchestrator | 2026-01-08 00:57:12 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:57:16.038438 | orchestrator | 2026-01-08 00:57:16 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED 2026-01-08 00:57:16.038512 | orchestrator | 2026-01-08 00:57:16 | INFO  | Task 
0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:57:16.038521 | orchestrator | 2026-01-08 00:57:16 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:57:16.038528 | orchestrator | 2026-01-08 00:57:16 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:57:19.147819 | orchestrator | 2026-01-08 00:57:19 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED 2026-01-08 00:57:19.149902 | orchestrator | 2026-01-08 00:57:19 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:57:19.153578 | orchestrator | 2026-01-08 00:57:19 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:57:19.153852 | orchestrator | 2026-01-08 00:57:19 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:57:22.200822 | orchestrator | 2026-01-08 00:57:22 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED 2026-01-08 00:57:22.202489 | orchestrator | 2026-01-08 00:57:22 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:57:22.204266 | orchestrator | 2026-01-08 00:57:22 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:57:22.204313 | orchestrator | 2026-01-08 00:57:22 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:57:25.250093 | orchestrator | 2026-01-08 00:57:25 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED 2026-01-08 00:57:25.252050 | orchestrator | 2026-01-08 00:57:25 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:57:25.253412 | orchestrator | 2026-01-08 00:57:25 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:57:25.253483 | orchestrator | 2026-01-08 00:57:25 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:57:28.303054 | orchestrator | 2026-01-08 00:57:28 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state 
STARTED 2026-01-08 00:57:28.305346 | orchestrator | 2026-01-08 00:57:28 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:57:28.307424 | orchestrator | 2026-01-08 00:57:28 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:57:28.307474 | orchestrator | 2026-01-08 00:57:28 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:57:31.357543 | orchestrator | 2026-01-08 00:57:31 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED 2026-01-08 00:57:31.360478 | orchestrator | 2026-01-08 00:57:31 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:57:31.362446 | orchestrator | 2026-01-08 00:57:31 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:57:31.362497 | orchestrator | 2026-01-08 00:57:31 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:57:34.408064 | orchestrator | 2026-01-08 00:57:34 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED 2026-01-08 00:57:34.409102 | orchestrator | 2026-01-08 00:57:34 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:57:34.410452 | orchestrator | 2026-01-08 00:57:34 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:57:34.410477 | orchestrator | 2026-01-08 00:57:34 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:57:37.460888 | orchestrator | 2026-01-08 00:57:37 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED 2026-01-08 00:57:37.464772 | orchestrator | 2026-01-08 00:57:37 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:57:37.467363 | orchestrator | 2026-01-08 00:57:37 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED 2026-01-08 00:57:37.467420 | orchestrator | 2026-01-08 00:57:37 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:57:40.504142 | orchestrator | 
2026-01-08 00:57:40 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED
2026-01-08 00:57:40.504200 | orchestrator | 2026-01-08 00:57:40 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED
2026-01-08 00:57:40.504976 | orchestrator | 2026-01-08 00:57:40 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state STARTED
2026-01-08 00:57:40.505002 | orchestrator | 2026-01-08 00:57:40 | INFO  | Wait 1 second(s) until the next check
2026-01-08 00:57:43.553889 | orchestrator | 2026-01-08 00:57:43 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED
2026-01-08 00:57:43.555666 | orchestrator | 2026-01-08 00:57:43 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED
2026-01-08 00:57:43.562058 | orchestrator | 2026-01-08 00:57:43 | INFO  | Task 0a758170-ac07-460c-a4e5-8f1f1386f9d5 is in state SUCCESS
2026-01-08 00:57:43.563590 | orchestrator |
2026-01-08 00:57:43.563644 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-08 00:57:43.563650 | orchestrator | 2.16.14
2026-01-08 00:57:43.563655 | orchestrator |
2026-01-08 00:57:43.563660 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-01-08 00:57:43.563665 | orchestrator |
2026-01-08 00:57:43.563670 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-01-08 00:57:43.563677 | orchestrator | Thursday 08 January 2026 00:46:31 +0000 (0:00:00.596) 0:00:00.596 ******
2026-01-08 00:57:43.563688 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:57:43.563695 | orchestrator |
2026-01-08 00:57:43.563701 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-01-08 00:57:43.563708 | orchestrator | Thursday 08 January 2026 00:46:32 +0000 (0:00:01.083) 0:00:01.679 ******
2026-01-08 00:57:43.563714 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.563720 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.563726 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.563732 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:57:43.563738 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:57:43.563744 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:57:43.563749 | orchestrator |
2026-01-08 00:57:43.563754 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-01-08 00:57:43.563761 | orchestrator | Thursday 08 January 2026 00:46:34 +0000 (0:00:01.783) 0:00:03.463 ******
2026-01-08 00:57:43.563767 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.563773 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.563780 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.563786 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:57:43.563792 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:57:43.563799 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:57:43.563804 | orchestrator |
2026-01-08 00:57:43.563809 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-08 00:57:43.563813 | orchestrator | Thursday 08 January 2026 00:46:34 +0000 (0:00:00.756) 0:00:04.219 ******
2026-01-08 00:57:43.563828 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.563832 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.563836 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.563839 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:57:43.563843 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:57:43.563847 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:57:43.563851 | orchestrator |
2026-01-08 00:57:43.563855 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-08 00:57:43.563859 | orchestrator | Thursday 08 January 2026 00:46:36 +0000 (0:00:01.111) 0:00:05.330 ******
2026-01-08 00:57:43.563863 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.563866 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.563870 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.563874 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:57:43.563877 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:57:43.563881 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:57:43.563886 | orchestrator |
2026-01-08 00:57:43.563892 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-01-08 00:57:43.563898 | orchestrator | Thursday 08 January 2026 00:46:36 +0000 (0:00:00.784) 0:00:06.114 ******
2026-01-08 00:57:43.563904 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.563910 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.563917 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.563923 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:57:43.564001 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:57:43.564008 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:57:43.564011 | orchestrator |
2026-01-08 00:57:43.564015 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-01-08 00:57:43.564019 | orchestrator | Thursday 08 January 2026 00:46:37 +0000 (0:00:00.621) 0:00:06.736 ******
2026-01-08 00:57:43.564023 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.564027 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.564031 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.564035 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:57:43.564038 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:57:43.564042 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:57:43.564046 | orchestrator |
2026-01-08 00:57:43.564050 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-01-08 00:57:43.564054 | orchestrator | Thursday 08 January 2026 00:46:39 +0000 (0:00:01.774) 0:00:08.510 ******
2026-01-08 00:57:43.564278 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.564287 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.564291 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.564295 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.564299 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.564303 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.564307 | orchestrator |
2026-01-08 00:57:43.564311 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-01-08 00:57:43.564314 | orchestrator | Thursday 08 January 2026 00:46:39 +0000 (0:00:00.693) 0:00:09.204 ******
2026-01-08 00:57:43.564318 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.564322 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.564326 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.564330 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:57:43.564333 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:57:43.564337 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:57:43.564341 | orchestrator |
2026-01-08 00:57:43.564345 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-01-08 00:57:43.564349 | orchestrator | Thursday 08 January 2026 00:46:40 +0000 (0:00:00.905) 0:00:10.110 ******
2026-01-08 00:57:43.564353 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-08 00:57:43.564356 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-08 00:57:43.564360 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-08 00:57:43.564364 | orchestrator |
2026-01-08 00:57:43.564368 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-01-08 00:57:43.564372 | orchestrator | Thursday 08 January 2026 00:46:41 +0000 (0:00:00.677) 0:00:10.788 ******
2026-01-08 00:57:43.564376 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.564379 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.564383 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.564397 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:57:43.564401 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:57:43.564405 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:57:43.564409 | orchestrator |
2026-01-08 00:57:43.564413 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-01-08 00:57:43.564416 | orchestrator | Thursday 08 January 2026 00:46:42 +0000 (0:00:01.273) 0:00:12.061 ******
2026-01-08 00:57:43.564420 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-08 00:57:43.564424 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-08 00:57:43.564428 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-08 00:57:43.564432 | orchestrator |
2026-01-08 00:57:43.564435 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-01-08 00:57:43.564439 | orchestrator | Thursday 08 January 2026 00:46:46 +0000 (0:00:03.652) 0:00:15.714 ******
2026-01-08 00:57:43.564450 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-08 00:57:43.564454 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-08 00:57:43.564458 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-08 00:57:43.564461 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.564465 | orchestrator |
2026-01-08 00:57:43.564469 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-01-08 00:57:43.564473 | orchestrator | Thursday 08 January 2026 00:46:47 +0000 (0:00:00.977) 0:00:16.691 ******
2026-01-08 00:57:43.564479 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-08 00:57:43.564489 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-08 00:57:43.564493 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-08 00:57:43.564497 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.564501 | orchestrator |
2026-01-08 00:57:43.564505 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-01-08 00:57:43.564509 | orchestrator | Thursday 08 January 2026 00:46:48 +0000 (0:00:01.033) 0:00:17.725 ******
2026-01-08 00:57:43.564514 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-08 00:57:43.564520 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-08 00:57:43.564524 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-08 00:57:43.564528 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.564532 | orchestrator |
2026-01-08 00:57:43.564535 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-01-08 00:57:43.564539 | orchestrator | Thursday 08 January 2026 00:46:48 +0000 (0:00:00.326) 0:00:18.052 ******
2026-01-08 00:57:43.564549 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-08 00:46:43.814672', 'end': '2026-01-08 00:46:44.072630', 'delta': '0:00:00.257958', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-08 00:57:43.564560 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-08 00:46:44.960375', 'end': '2026-01-08 00:46:45.268805', 'delta': '0:00:00.308430', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-08 00:57:43.564567 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-08 00:46:45.897984', 'end': '2026-01-08 00:46:46.209981', 'delta': '0:00:00.311997', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-08 00:57:43.564571 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.564575 | orchestrator |
2026-01-08 00:57:43.564579 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-01-08 00:57:43.564582 | orchestrator | Thursday 08 January 2026 00:46:49 +0000 (0:00:00.217) 0:00:18.269 ******
2026-01-08 00:57:43.564586 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.564590 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.564594 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.564598 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:57:43.564601 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:57:43.564605 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:57:43.564609 | orchestrator |
2026-01-08 00:57:43.564613 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-01-08 00:57:43.564617 | orchestrator | Thursday 08 January 2026 00:46:50 +0000 (0:00:01.817) 0:00:20.086 ******
2026-01-08 00:57:43.564775 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-08 00:57:43.564780 | orchestrator |
2026-01-08 00:57:43.564784 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-01-08 00:57:43.564788 | orchestrator | Thursday 08 January 2026 00:46:51 +0000 (0:00:00.870) 0:00:20.957 ******
2026-01-08 00:57:43.564792 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.564796 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.564800 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.564803 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.564810 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.564817 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.564827 | orchestrator |
2026-01-08 00:57:43.564834 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-01-08 00:57:43.564841 | orchestrator | Thursday 08 January 2026 00:46:52 +0000 (0:00:01.106) 0:00:22.063 ******
2026-01-08 00:57:43.564847 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.564853 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.564859 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.564865 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.564871 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.564877 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.564882 | orchestrator |
2026-01-08 00:57:43.564888 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-08 00:57:43.564963 | orchestrator | Thursday 08 January 2026 00:46:53 +0000 (0:00:00.886) 0:00:22.949 ******
2026-01-08 00:57:43.564973 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.564980 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.564986 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.564993 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.565183 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.565189 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.565193 | orchestrator |
2026-01-08 00:57:43.565205 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-01-08 00:57:43.565209 | orchestrator | Thursday 08 January 2026 00:46:54 +0000 (0:00:00.944) 0:00:23.893 ******
2026-01-08 00:57:43.565213 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.565217 | orchestrator |
2026-01-08 00:57:43.565221 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-01-08 00:57:43.565225 | orchestrator | Thursday 08 January 2026 00:46:54 +0000 (0:00:00.244) 0:00:24.138 ******
2026-01-08 00:57:43.565228 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.565261 | orchestrator |
2026-01-08 00:57:43.565265 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-08 00:57:43.565269 | orchestrator | Thursday 08 January 2026 00:46:55 +0000 (0:00:00.255) 0:00:24.394 ******
2026-01-08 00:57:43.565273 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.565277 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.565281 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.565299 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.565304 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.565308 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.565312 | orchestrator |
2026-01-08 00:57:43.565316 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-01-08 00:57:43.565320 | orchestrator | Thursday 08 January 2026 00:46:55 +0000 (0:00:00.697) 0:00:25.092 ******
2026-01-08 00:57:43.565324 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.565328 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.565332 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.565336 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.565340 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.565346 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.565352 | orchestrator |
2026-01-08 00:57:43.565359 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-08 00:57:43.565403 | orchestrator | Thursday 08 January 2026 00:46:57 +0000 (0:00:01.250) 0:00:26.343 ******
2026-01-08 00:57:43.565412 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.565419 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.565425 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.565429 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.565434 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.565440 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.565446 | orchestrator |
2026-01-08 00:57:43.565453 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-08 00:57:43.565459 | orchestrator | Thursday 08 January 2026 00:46:57 +0000 (0:00:00.836) 0:00:27.180 ******
2026-01-08 00:57:43.565465 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.565471 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.565477 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.565652 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.565661 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.565665 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.565669 | orchestrator |
2026-01-08 00:57:43.565673 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-08 00:57:43.565682 | orchestrator | Thursday 08 January 2026 00:46:58 +0000 (0:00:00.826) 0:00:28.006 ******
2026-01-08 00:57:43.565686 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.565696 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.565700 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.565703 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.565707 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.565711 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.565715 | orchestrator |
2026-01-08 00:57:43.565719 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-08 00:57:43.565723 | orchestrator | Thursday 08 January 2026 00:46:59 +0000 (0:00:00.743) 0:00:28.750 ******
2026-01-08 00:57:43.565726 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.565730 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.565734 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.565738 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.565742 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.565746 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.565750 | orchestrator |
2026-01-08 00:57:43.565754 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-08 00:57:43.565758 | orchestrator | Thursday 08 January 2026 00:47:00 +0000 (0:00:00.757) 0:00:29.507 ******
2026-01-08 00:57:43.565762 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.565765 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.565769 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.565773 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.565793 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.565798 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.565802 | orchestrator |
2026-01-08 00:57:43.565806 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-01-08 00:57:43.565810 | orchestrator | Thursday 08 January 2026 00:47:01 +0000 (0:00:00.828) 0:00:30.335 ******
2026-01-08 00:57:43.565849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a2587794--ee13--56a9--b71d--149b2fd55b33-osd--block--a2587794--ee13--56a9--b71d--149b2fd55b33', 'dm-uuid-LVM-bEENzoABaKlXIVix9f7oeh01iGEYhwYje5ILS3OJKDIlIgaK2J1mQi3X1kQLOMsS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-08 00:57:43.565856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--703f1367--865b--52a8--8f96--c728fe171d20-osd--block--703f1367--865b--52a8--8f96--c728fe171d20', 'dm-uuid-LVM-HNtczakD2ja3G1Vo2m6WZZaGI4em8Ptu21dROg2ZrlYRnSlALMwv07zh00XTq3Jz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-08 00:57:43.565874 | orchestrator |
skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.565880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.565888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.565895 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.565899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.565905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.565911 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.565918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.565946 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2', 'scsi-SQEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2-part1', 'scsi-SQEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2-part14', 'scsi-SQEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2-part15', 'scsi-SQEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2-part16', 'scsi-SQEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:57:43.565965 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a2587794--ee13--56a9--b71d--149b2fd55b33-osd--block--a2587794--ee13--56a9--b71d--149b2fd55b33'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1XbyXP-e8gb-I4vr-LRiY-ExbH-DF2J-vfvv0K', 'scsi-0QEMU_QEMU_HARDDISK_124f655d-2588-4acd-9ece-c76299342e82', 'scsi-SQEMU_QEMU_HARDDISK_124f655d-2588-4acd-9ece-c76299342e82'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:57:43.565972 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--703f1367--865b--52a8--8f96--c728fe171d20-osd--block--703f1367--865b--52a8--8f96--c728fe171d20'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8z1lYd-cIsw-fkBh-faUa-wXb7-czkT-xlmZfI', 'scsi-0QEMU_QEMU_HARDDISK_6a42190a-2484-4e8d-b5ec-29d2f01455ea', 'scsi-SQEMU_QEMU_HARDDISK_6a42190a-2484-4e8d-b5ec-29d2f01455ea'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:57:43.565979 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0e18c249-c135-4dbc-997b-e877fb7ddadb', 'scsi-SQEMU_QEMU_HARDDISK_0e18c249-c135-4dbc-997b-e877fb7ddadb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:57:43.565987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-08-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:57:43.566012 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--738668c3--85d9--5999--8ba6--58353e2d69fe-osd--block--738668c3--85d9--5999--8ba6--58353e2d69fe', 'dm-uuid-LVM-V6XNBw63PUUGEqjR32uErniLBklwwxqrPlXQfKTbKJqfnHVANQQvB8h0YVlx4Mow'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.566119 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3efd50ac--0c86--56a3--96dd--80e79744aaab-osd--block--3efd50ac--0c86--56a3--96dd--80e79744aaab', 
'dm-uuid-LVM-AdLH4Bzf4albY0ZKx0mHhTp84P7qfNw2mJdNadtBaDkMxt1H9L9WLPqmqsmsA33M'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.566127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.566132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.566136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.566140 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.566144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e7c35fc3--220b--5a3c--9d36--601219d17f28-osd--block--e7c35fc3--220b--5a3c--9d36--601219d17f28', 'dm-uuid-LVM-XuDkDnBuxUAkQxcjcNSDHfD1ciReVtRA6WECqtp650LVlgJO2Ty9MSobiJ12IINR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.566148 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.566172 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1538380d--5182--5482--9616--e6fa16e7f592-osd--block--1538380d--5182--5482--9616--e6fa16e7f592', 'dm-uuid-LVM-nEL93Xw1a3nKIgzDWGLG37Ki0dfG35InHkcT1Pv33vgx09JnaiRx3cG9AVL1mEMd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.566655 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.566669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.566677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.566681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.566685 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.566689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.566743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1', 'scsi-SQEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part1', 'scsi-SQEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part14', 'scsi-SQEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part15', 
'scsi-SQEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part16', 'scsi-SQEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:57:43.566754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.566761 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--738668c3--85d9--5999--8ba6--58353e2d69fe-osd--block--738668c3--85d9--5999--8ba6--58353e2d69fe'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-SKcPqy-GBza-EcQ9-39iv-knfu-8xbO-YhTNYD', 'scsi-0QEMU_QEMU_HARDDISK_d8742191-d797-4305-9777-b3b4a7e3f85b', 'scsi-SQEMU_QEMU_HARDDISK_d8742191-d797-4305-9777-b3b4a7e3f85b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:57:43.566766 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.566770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--3efd50ac--0c86--56a3--96dd--80e79744aaab-osd--block--3efd50ac--0c86--56a3--96dd--80e79744aaab'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1XcJBj-e4Zo-3zFx-nX02-BuBE-lIU7-9n3Hhr', 'scsi-0QEMU_QEMU_HARDDISK_c2b7f2d1-b409-4164-9c96-a325340a2181', 'scsi-SQEMU_QEMU_HARDDISK_c2b7f2d1-b409-4164-9c96-a325340a2181'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:57:43.566774 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.566804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7a76cbc0-0137-4ff9-923d-b4b1dcc050dd', 'scsi-SQEMU_QEMU_HARDDISK_7a76cbc0-0137-4ff9-923d-b4b1dcc050dd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:57:43.566813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.566817 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.566822 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-08-00-03-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:57:43.566827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.566851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa', 'scsi-SQEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part1', 'scsi-SQEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part14', 'scsi-SQEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part15', 'scsi-SQEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part16', 'scsi-SQEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 
'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:57:43.566885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e7c35fc3--220b--5a3c--9d36--601219d17f28-osd--block--e7c35fc3--220b--5a3c--9d36--601219d17f28'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WYjp2M-oBFi-jndj-unti-K3JK-LgdU-NtU3Qm', 'scsi-0QEMU_QEMU_HARDDISK_c2709d8e-63c5-44e7-8dd5-568d2763b490', 'scsi-SQEMU_QEMU_HARDDISK_c2709d8e-63c5-44e7-8dd5-568d2763b490'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:57:43.566891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1538380d--5182--5482--9616--e6fa16e7f592-osd--block--1538380d--5182--5482--9616--e6fa16e7f592'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fQ8LPb-Dq42-HIia-0FfU-1SGY-R4UN-qL0FTy', 'scsi-0QEMU_QEMU_HARDDISK_083917d3-ae8a-40d6-964f-5c24c6020ef0', 'scsi-SQEMU_QEMU_HARDDISK_083917d3-ae8a-40d6-964f-5c24c6020ef0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:57:43.566898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:57:43.566902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca658e2c-6884-4cb0-a984-2a30c4218b42', 'scsi-SQEMU_QEMU_HARDDISK_ca658e2c-6884-4cb0-a984-2a30c4218b42'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
[… repeated per-device 'skipping' loop items elided: loop0–loop7 (0.00 Bytes virtual loop devices), sda (80.00 GB QEMU HARDDISK root disk with partitions sda1 'cloudimg-rootfs', sda14, sda15 'UEFI', sda16 'BOOT') and sr0 (506.00 KB QEMU DVD-ROM, label 'config-2') on testbed-node-0, testbed-node-1 and testbed-node-2; testbed-node-4 and testbed-node-5 report plain 'skipping' …]
2026-01-08 00:57:43.567901 | orchestrator |
2026-01-08 00:57:43.567912 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-01-08 00:57:43.567921 | orchestrator | Thursday 08 January 2026 00:47:02 +0000 (0:00:01.628) 0:00:31.964 ******
[… repeated per-device 'skipping' loop items elided: every item reports 'skip_reason': 'Conditional result was False' with 'false_condition': 'osd_auto_discovery | default(False) | bool'; skipped devices on testbed-node-3, testbed-node-4 and testbed-node-5 include dm-0/dm-1 (20.00 GB ceph OSD block LVs), loop0–loop7, sda (80.00 GB root disk), sdb/sdc (20.00 GB QEMU HARDDISK OSD disks backing dm-0/dm-1), sdd (20.00 GB QEMU HARDDISK) and sr0 (QEMU DVD-ROM, label 'config-2') …]
2026-01-08 00:57:43.568320 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1538380d--5182--5482--9616--e6fa16e7f592-osd--block--1538380d--5182--5482--9616--e6fa16e7f592', 'dm-uuid-LVM-nEL93Xw1a3nKIgzDWGLG37Ki0dfG35InHkcT1Pv33vgx09JnaiRx3cG9AVL1mEMd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None,
'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568329 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568333 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568337 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568366 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568371 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568377 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568382 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.568392 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568396 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568400 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568404 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568445 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1', 'scsi-SQEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part1', 'scsi-SQEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part14', 'scsi-SQEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 
'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part15', 'scsi-SQEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part16', 'scsi-SQEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568457 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568461 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': 
['ceph--738668c3--85d9--5999--8ba6--58353e2d69fe-osd--block--738668c3--85d9--5999--8ba6--58353e2d69fe'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-SKcPqy-GBza-EcQ9-39iv-knfu-8xbO-YhTNYD', 'scsi-0QEMU_QEMU_HARDDISK_d8742191-d797-4305-9777-b3b4a7e3f85b', 'scsi-SQEMU_QEMU_HARDDISK_d8742191-d797-4305-9777-b3b4a7e3f85b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568491 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568497 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568503 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--3efd50ac--0c86--56a3--96dd--80e79744aaab-osd--block--3efd50ac--0c86--56a3--96dd--80e79744aaab'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1XcJBj-e4Zo-3zFx-nX02-BuBE-lIU7-9n3Hhr', 'scsi-0QEMU_QEMU_HARDDISK_c2b7f2d1-b409-4164-9c96-a325340a2181', 'scsi-SQEMU_QEMU_HARDDISK_c2b7f2d1-b409-4164-9c96-a325340a2181'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568510 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568541 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, 
Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa', 'scsi-SQEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part1', 'scsi-SQEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part14', 'scsi-SQEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part15', 'scsi-SQEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part16', 'scsi-SQEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-08 00:57:43.568549 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568563 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7a76cbc0-0137-4ff9-923d-b4b1dcc050dd', 'scsi-SQEMU_QEMU_HARDDISK_7a76cbc0-0137-4ff9-923d-b4b1dcc050dd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568570 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e7c35fc3--220b--5a3c--9d36--601219d17f28-osd--block--e7c35fc3--220b--5a3c--9d36--601219d17f28'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WYjp2M-oBFi-jndj-unti-K3JK-LgdU-NtU3Qm', 'scsi-0QEMU_QEMU_HARDDISK_c2709d8e-63c5-44e7-8dd5-568d2763b490', 'scsi-SQEMU_QEMU_HARDDISK_c2709d8e-63c5-44e7-8dd5-568d2763b490'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568576 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568630 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1538380d--5182--5482--9616--e6fa16e7f592-osd--block--1538380d--5182--5482--9616--e6fa16e7f592'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fQ8LPb-Dq42-HIia-0FfU-1SGY-R4UN-qL0FTy', 'scsi-0QEMU_QEMU_HARDDISK_083917d3-ae8a-40d6-964f-5c24c6020ef0', 'scsi-SQEMU_QEMU_HARDDISK_083917d3-ae8a-40d6-964f-5c24c6020ef0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568638 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca658e2c-6884-4cb0-a984-2a30c4218b42', 'scsi-SQEMU_QEMU_HARDDISK_ca658e2c-6884-4cb0-a984-2a30c4218b42'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568649 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-08-00-03-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568656 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568662 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-08-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568668 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568712 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568719 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568729 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ea0a032-e520-451d-8180-d4c0b00694e2', 'scsi-SQEMU_QEMU_HARDDISK_0ea0a032-e520-451d-8180-d4c0b00694e2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ea0a032-e520-451d-8180-d4c0b00694e2-part1', 'scsi-SQEMU_QEMU_HARDDISK_0ea0a032-e520-451d-8180-d4c0b00694e2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ea0a032-e520-451d-8180-d4c0b00694e2-part14', 'scsi-SQEMU_QEMU_HARDDISK_0ea0a032-e520-451d-8180-d4c0b00694e2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ea0a032-e520-451d-8180-d4c0b00694e2-part15', 'scsi-SQEMU_QEMU_HARDDISK_0ea0a032-e520-451d-8180-d4c0b00694e2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0ea0a032-e520-451d-8180-d4c0b00694e2-part16', 'scsi-SQEMU_QEMU_HARDDISK_0ea0a032-e520-451d-8180-d4c0b00694e2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-08 00:57:43.568734 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.568767 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-08-00-03-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568773 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.568777 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.568781 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568790 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568794 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568798 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568802 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568806 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568852 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568865 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568875 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568882 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568889 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568896 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568941 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568953 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0f9a47d-1724-4561-93ab-95a64de5d531', 'scsi-SQEMU_QEMU_HARDDISK_f0f9a47d-1724-4561-93ab-95a64de5d531'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0f9a47d-1724-4561-93ab-95a64de5d531-part1', 'scsi-SQEMU_QEMU_HARDDISK_f0f9a47d-1724-4561-93ab-95a64de5d531-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0f9a47d-1724-4561-93ab-95a64de5d531-part14', 'scsi-SQEMU_QEMU_HARDDISK_f0f9a47d-1724-4561-93ab-95a64de5d531-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0f9a47d-1724-4561-93ab-95a64de5d531-part15', 'scsi-SQEMU_QEMU_HARDDISK_f0f9a47d-1724-4561-93ab-95a64de5d531-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f0f9a47d-1724-4561-93ab-95a64de5d531-part16', 'scsi-SQEMU_QEMU_HARDDISK_f0f9a47d-1724-4561-93ab-95a64de5d531-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-08 00:57:43.568962 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568966 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-08-00-03-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.568970 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.569000 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.569008 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.569015 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b666bc2c-7776-4612-aae4-ea993c4606ba', 'scsi-SQEMU_QEMU_HARDDISK_b666bc2c-7776-4612-aae4-ea993c4606ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b666bc2c-7776-4612-aae4-ea993c4606ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_b666bc2c-7776-4612-aae4-ea993c4606ba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b666bc2c-7776-4612-aae4-ea993c4606ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_b666bc2c-7776-4612-aae4-ea993c4606ba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b666bc2c-7776-4612-aae4-ea993c4606ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_b666bc2c-7776-4612-aae4-ea993c4606ba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b666bc2c-7776-4612-aae4-ea993c4606ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_b666bc2c-7776-4612-aae4-ea993c4606ba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-08 00:57:43.569020 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-08-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:57:43.569024 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.569028 | orchestrator | 2026-01-08 00:57:43.569055 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-08 00:57:43.569064 | orchestrator | Thursday 08 January 2026 00:47:04 +0000 (0:00:01.755) 0:00:33.719 ****** 2026-01-08 00:57:43.569068 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.569072 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.569076 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.569081 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.569087 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.569199 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.569203 | orchestrator | 2026-01-08 00:57:43.569207 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-08 00:57:43.569212 | orchestrator | Thursday 08 January 2026 00:47:06 +0000 (0:00:01.788) 0:00:35.507 ****** 2026-01-08 00:57:43.569215 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.569219 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.569223 | 
orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.569231 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.569235 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.569239 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.569243 | orchestrator | 2026-01-08 00:57:43.569247 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-08 00:57:43.569251 | orchestrator | Thursday 08 January 2026 00:47:07 +0000 (0:00:00.846) 0:00:36.354 ****** 2026-01-08 00:57:43.569254 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.569258 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.569262 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.569266 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.569270 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.569274 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.569277 | orchestrator | 2026-01-08 00:57:43.569281 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-08 00:57:43.569285 | orchestrator | Thursday 08 January 2026 00:47:07 +0000 (0:00:00.751) 0:00:37.106 ****** 2026-01-08 00:57:43.569289 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.569292 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.569296 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.569300 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.569304 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.569311 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.569315 | orchestrator | 2026-01-08 00:57:43.569318 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-08 00:57:43.569322 | orchestrator | Thursday 08 January 2026 00:47:08 +0000 (0:00:00.739) 0:00:37.846 ****** 2026-01-08 00:57:43.569326 | orchestrator | skipping: 
[testbed-node-3] 2026-01-08 00:57:43.569330 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.569334 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.569337 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.569341 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.569345 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.569349 | orchestrator | 2026-01-08 00:57:43.569352 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-08 00:57:43.569356 | orchestrator | Thursday 08 January 2026 00:47:09 +0000 (0:00:00.635) 0:00:38.481 ****** 2026-01-08 00:57:43.569360 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.569364 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.569368 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.569371 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.569375 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.569379 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.569383 | orchestrator | 2026-01-08 00:57:43.569386 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-08 00:57:43.569390 | orchestrator | Thursday 08 January 2026 00:47:09 +0000 (0:00:00.457) 0:00:38.939 ****** 2026-01-08 00:57:43.569394 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-01-08 00:57:43.569402 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-01-08 00:57:43.569406 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-01-08 00:57:43.569410 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-01-08 00:57:43.569414 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-08 00:57:43.569417 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-01-08 00:57:43.569421 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 
2026-01-08 00:57:43.569425 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-01-08 00:57:43.569429 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-08 00:57:43.569432 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-01-08 00:57:43.569436 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-01-08 00:57:43.569440 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-08 00:57:43.569448 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-01-08 00:57:43.569452 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-01-08 00:57:43.569456 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-01-08 00:57:43.569460 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-01-08 00:57:43.569464 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-01-08 00:57:43.569467 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-01-08 00:57:43.569471 | orchestrator | 2026-01-08 00:57:43.569475 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-08 00:57:43.569479 | orchestrator | Thursday 08 January 2026 00:47:12 +0000 (0:00:03.065) 0:00:42.004 ****** 2026-01-08 00:57:43.569482 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-08 00:57:43.569486 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-08 00:57:43.569490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-08 00:57:43.569494 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.569497 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-08 00:57:43.569501 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-08 00:57:43.569505 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-08 00:57:43.569509 | orchestrator | skipping: [testbed-node-4] 
2026-01-08 00:57:43.569513 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-08 00:57:43.569544 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-08 00:57:43.569549 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-08 00:57:43.569553 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.569556 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-08 00:57:43.569560 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-08 00:57:43.569564 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-08 00:57:43.569568 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.569571 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-01-08 00:57:43.569575 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-01-08 00:57:43.569579 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-01-08 00:57:43.569583 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.569586 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-01-08 00:57:43.569590 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-01-08 00:57:43.569594 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-01-08 00:57:43.569598 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.569601 | orchestrator | 2026-01-08 00:57:43.569605 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-08 00:57:43.569609 | orchestrator | Thursday 08 January 2026 00:47:13 +0000 (0:00:00.888) 0:00:42.893 ****** 2026-01-08 00:57:43.569613 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.569619 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.569623 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.569628 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 00:57:43.569632 | orchestrator | 2026-01-08 00:57:43.569635 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-08 00:57:43.569643 | orchestrator | Thursday 08 January 2026 00:47:15 +0000 (0:00:01.349) 0:00:44.243 ****** 2026-01-08 00:57:43.569647 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.569651 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.569655 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.569658 | orchestrator | 2026-01-08 00:57:43.569662 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-08 00:57:43.569666 | orchestrator | Thursday 08 January 2026 00:47:15 +0000 (0:00:00.456) 0:00:44.699 ****** 2026-01-08 00:57:43.569670 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.569674 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.569677 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.569681 | orchestrator | 2026-01-08 00:57:43.569685 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-08 00:57:43.569689 | orchestrator | Thursday 08 January 2026 00:47:15 +0000 (0:00:00.372) 0:00:45.071 ****** 2026-01-08 00:57:43.569693 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.569696 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.569701 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.569706 | orchestrator | 2026-01-08 00:57:43.569710 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-08 00:57:43.569714 | orchestrator | Thursday 08 January 2026 00:47:16 +0000 (0:00:00.561) 0:00:45.633 ****** 2026-01-08 00:57:43.569719 | orchestrator | 
ok: [testbed-node-3] 2026-01-08 00:57:43.569723 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.569728 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.569732 | orchestrator | 2026-01-08 00:57:43.569736 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-08 00:57:43.569740 | orchestrator | Thursday 08 January 2026 00:47:16 +0000 (0:00:00.466) 0:00:46.100 ****** 2026-01-08 00:57:43.569745 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-08 00:57:43.569750 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-08 00:57:43.569756 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-08 00:57:43.569795 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.569805 | orchestrator | 2026-01-08 00:57:43.569811 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-08 00:57:43.569818 | orchestrator | Thursday 08 January 2026 00:47:17 +0000 (0:00:00.802) 0:00:46.903 ****** 2026-01-08 00:57:43.569823 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-08 00:57:43.569830 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-08 00:57:43.569845 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-08 00:57:43.569859 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.569870 | orchestrator | 2026-01-08 00:57:43.569877 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-08 00:57:43.569883 | orchestrator | Thursday 08 January 2026 00:47:18 +0000 (0:00:00.359) 0:00:47.262 ****** 2026-01-08 00:57:43.569888 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-08 00:57:43.569895 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-08 00:57:43.569902 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-01-08 00:57:43.569908 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.569914 | orchestrator | 2026-01-08 00:57:43.569920 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-08 00:57:43.569926 | orchestrator | Thursday 08 January 2026 00:47:18 +0000 (0:00:00.487) 0:00:47.752 ****** 2026-01-08 00:57:43.569938 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.569945 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.569951 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.569958 | orchestrator | 2026-01-08 00:57:43.569964 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-08 00:57:43.569971 | orchestrator | Thursday 08 January 2026 00:47:18 +0000 (0:00:00.303) 0:00:48.055 ****** 2026-01-08 00:57:43.569977 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-08 00:57:43.569984 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-08 00:57:43.570053 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-08 00:57:43.570065 | orchestrator | 2026-01-08 00:57:43.570071 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-08 00:57:43.570078 | orchestrator | Thursday 08 January 2026 00:47:19 +0000 (0:00:00.779) 0:00:48.835 ****** 2026-01-08 00:57:43.570086 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-08 00:57:43.570109 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-08 00:57:43.570116 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-08 00:57:43.570122 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-08 00:57:43.570128 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-08 00:57:43.570134 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-08 00:57:43.570140 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-08 00:57:43.570146 | orchestrator | 2026-01-08 00:57:43.570152 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-08 00:57:43.570158 | orchestrator | Thursday 08 January 2026 00:47:20 +0000 (0:00:00.796) 0:00:49.632 ****** 2026-01-08 00:57:43.570164 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-08 00:57:43.570171 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-08 00:57:43.570177 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-08 00:57:43.570184 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-08 00:57:43.570190 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-08 00:57:43.570202 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-08 00:57:43.570208 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-08 00:57:43.570214 | orchestrator | 2026-01-08 00:57:43.570220 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-08 00:57:43.570226 | orchestrator | Thursday 08 January 2026 00:47:22 +0000 (0:00:01.945) 0:00:51.577 ****** 2026-01-08 00:57:43.570233 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:57:43.570240 | orchestrator | 2026-01-08 00:57:43.570247 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2026-01-08 00:57:43.570253 | orchestrator | Thursday 08 January 2026 00:47:23 +0000 (0:00:01.401) 0:00:52.979 ****** 2026-01-08 00:57:43.570259 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:57:43.570265 | orchestrator | 2026-01-08 00:57:43.570272 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-08 00:57:43.570278 | orchestrator | Thursday 08 January 2026 00:47:24 +0000 (0:00:01.208) 0:00:54.188 ****** 2026-01-08 00:57:43.570284 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.570290 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.570304 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.570310 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.570316 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.570322 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.570329 | orchestrator | 2026-01-08 00:57:43.570336 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-08 00:57:43.570342 | orchestrator | Thursday 08 January 2026 00:47:26 +0000 (0:00:01.280) 0:00:55.468 ****** 2026-01-08 00:57:43.570349 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.570355 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.570362 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.570368 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.570376 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.570382 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.570389 | orchestrator | 2026-01-08 00:57:43.570393 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-08 00:57:43.570397 | orchestrator | Thursday 08 January 2026 00:47:27 +0000 
(0:00:01.169) 0:00:56.638 ****** 2026-01-08 00:57:43.570401 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.570404 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.570408 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.570412 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.570416 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.570419 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.570423 | orchestrator | 2026-01-08 00:57:43.570427 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-08 00:57:43.570431 | orchestrator | Thursday 08 January 2026 00:47:28 +0000 (0:00:01.018) 0:00:57.657 ****** 2026-01-08 00:57:43.570435 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.570438 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.570444 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.570450 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.570457 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.570462 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.570468 | orchestrator | 2026-01-08 00:57:43.570474 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-08 00:57:43.570480 | orchestrator | Thursday 08 January 2026 00:47:29 +0000 (0:00:00.828) 0:00:58.485 ****** 2026-01-08 00:57:43.570487 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.570493 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.570500 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.570506 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.570512 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.570550 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.570555 | orchestrator | 2026-01-08 00:57:43.570559 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2026-01-08 00:57:43.570564 | orchestrator | Thursday 08 January 2026 00:47:30 +0000 (0:00:01.376) 0:00:59.862 ****** 2026-01-08 00:57:43.570571 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.570578 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.570585 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.570592 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.570599 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.570606 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.570613 | orchestrator | 2026-01-08 00:57:43.570620 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-08 00:57:43.570626 | orchestrator | Thursday 08 January 2026 00:47:31 +0000 (0:00:00.598) 0:01:00.460 ****** 2026-01-08 00:57:43.570634 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.570641 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.570648 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.570653 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.570657 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.570661 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.570669 | orchestrator | 2026-01-08 00:57:43.570673 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-08 00:57:43.570677 | orchestrator | Thursday 08 January 2026 00:47:32 +0000 (0:00:01.101) 0:01:01.562 ****** 2026-01-08 00:57:43.570681 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.570685 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.570688 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.570692 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.570696 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.570700 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.570707 | orchestrator | 2026-01-08 
00:57:43.570713 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-08 00:57:43.570719 | orchestrator | Thursday 08 January 2026 00:47:34 +0000 (0:00:01.916) 0:01:03.479 ****** 2026-01-08 00:57:43.570725 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.570731 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.570741 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.570748 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.570753 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.570759 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.570766 | orchestrator | 2026-01-08 00:57:43.570772 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-08 00:57:43.570778 | orchestrator | Thursday 08 January 2026 00:47:36 +0000 (0:00:02.724) 0:01:06.203 ****** 2026-01-08 00:57:43.570784 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.570791 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.570797 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.570803 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.570810 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.570815 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.570821 | orchestrator | 2026-01-08 00:57:43.570828 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-08 00:57:43.570834 | orchestrator | Thursday 08 January 2026 00:47:38 +0000 (0:00:01.072) 0:01:07.276 ****** 2026-01-08 00:57:43.570840 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.570846 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.570852 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.570859 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.570866 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.570873 | 
orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.570879 | orchestrator | 2026-01-08 00:57:43.570886 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-08 00:57:43.570892 | orchestrator | Thursday 08 January 2026 00:47:39 +0000 (0:00:01.611) 0:01:08.887 ****** 2026-01-08 00:57:43.570899 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.570905 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.570911 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.570917 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.570924 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.570930 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.570936 | orchestrator | 2026-01-08 00:57:43.570942 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-08 00:57:43.570949 | orchestrator | Thursday 08 January 2026 00:47:40 +0000 (0:00:01.170) 0:01:10.057 ****** 2026-01-08 00:57:43.570955 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.570961 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.570967 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.570973 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.570980 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.570986 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.570992 | orchestrator | 2026-01-08 00:57:43.570999 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-08 00:57:43.571003 | orchestrator | Thursday 08 January 2026 00:47:42 +0000 (0:00:01.746) 0:01:11.804 ****** 2026-01-08 00:57:43.571007 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.571015 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.571019 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.571023 | orchestrator | skipping: [testbed-node-0] 2026-01-08 
00:57:43.571027 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.571033 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.571038 | orchestrator | 2026-01-08 00:57:43.571049 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-08 00:57:43.571055 | orchestrator | Thursday 08 January 2026 00:47:43 +0000 (0:00:00.903) 0:01:12.707 ****** 2026-01-08 00:57:43.571061 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.571067 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.571073 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.571079 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.571086 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.571223 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.571234 | orchestrator | 2026-01-08 00:57:43.571239 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-08 00:57:43.571243 | orchestrator | Thursday 08 January 2026 00:47:44 +0000 (0:00:01.010) 0:01:13.718 ****** 2026-01-08 00:57:43.571247 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.571251 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.571254 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.571258 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.571296 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.571301 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.571304 | orchestrator | 2026-01-08 00:57:43.571308 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-08 00:57:43.571312 | orchestrator | Thursday 08 January 2026 00:47:45 +0000 (0:00:01.125) 0:01:14.844 ****** 2026-01-08 00:57:43.571316 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.571320 | orchestrator | skipping: [testbed-node-4] 2026-01-08 
00:57:43.571324 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.571327 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.571331 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.571335 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.571339 | orchestrator | 2026-01-08 00:57:43.571343 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-08 00:57:43.571347 | orchestrator | Thursday 08 January 2026 00:47:47 +0000 (0:00:02.095) 0:01:16.940 ****** 2026-01-08 00:57:43.571350 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.571354 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.571358 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.571362 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.571365 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.571369 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.571373 | orchestrator | 2026-01-08 00:57:43.571377 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-08 00:57:43.571381 | orchestrator | Thursday 08 January 2026 00:47:49 +0000 (0:00:01.508) 0:01:18.448 ****** 2026-01-08 00:57:43.571384 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.571388 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.571392 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.571396 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.571399 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.571403 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.571407 | orchestrator | 2026-01-08 00:57:43.571411 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-08 00:57:43.571414 | orchestrator | Thursday 08 January 2026 00:47:50 +0000 (0:00:01.610) 0:01:20.058 ****** 2026-01-08 00:57:43.571418 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:57:43.571422 | 
orchestrator | changed: [testbed-node-3] 2026-01-08 00:57:43.571433 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:57:43.571437 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:57:43.571441 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:57:43.571449 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:57:43.571453 | orchestrator | 2026-01-08 00:57:43.571457 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-08 00:57:43.571461 | orchestrator | Thursday 08 January 2026 00:47:52 +0000 (0:00:01.752) 0:01:21.810 ****** 2026-01-08 00:57:43.571464 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:57:43.571468 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:57:43.571472 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:57:43.571476 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:57:43.571479 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:57:43.571483 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:57:43.571487 | orchestrator | 2026-01-08 00:57:43.571491 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-01-08 00:57:43.571494 | orchestrator | Thursday 08 January 2026 00:47:55 +0000 (0:00:03.145) 0:01:24.955 ****** 2026-01-08 00:57:43.571499 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:57:43.571503 | orchestrator | 2026-01-08 00:57:43.571507 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-01-08 00:57:43.571510 | orchestrator | Thursday 08 January 2026 00:47:57 +0000 (0:00:01.358) 0:01:26.314 ****** 2026-01-08 00:57:43.571514 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.571518 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.571521 | orchestrator | 
skipping: [testbed-node-5] 2026-01-08 00:57:43.571525 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.571529 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.571533 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.571536 | orchestrator | 2026-01-08 00:57:43.571540 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-01-08 00:57:43.571544 | orchestrator | Thursday 08 January 2026 00:47:57 +0000 (0:00:00.650) 0:01:26.965 ****** 2026-01-08 00:57:43.571548 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.571551 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.571555 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.571559 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.571570 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.571582 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.571586 | orchestrator | 2026-01-08 00:57:43.571590 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-01-08 00:57:43.571593 | orchestrator | Thursday 08 January 2026 00:47:58 +0000 (0:00:01.103) 0:01:28.068 ****** 2026-01-08 00:57:43.571597 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-08 00:57:43.571601 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-08 00:57:43.571605 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-08 00:57:43.571609 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-08 00:57:43.571613 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-08 00:57:43.571616 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-08 00:57:43.571621 | orchestrator | ok: 
[testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-08 00:57:43.571624 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-08 00:57:43.571628 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-08 00:57:43.571632 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-08 00:57:43.571650 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-08 00:57:43.571654 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-08 00:57:43.571661 | orchestrator | 2026-01-08 00:57:43.571665 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-01-08 00:57:43.571669 | orchestrator | Thursday 08 January 2026 00:48:00 +0000 (0:00:01.434) 0:01:29.503 ****** 2026-01-08 00:57:43.571673 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:57:43.571677 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:57:43.571680 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:57:43.571684 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:57:43.571688 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:57:43.571692 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:57:43.571696 | orchestrator | 2026-01-08 00:57:43.571699 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-01-08 00:57:43.571703 | orchestrator | Thursday 08 January 2026 00:48:01 +0000 (0:00:01.392) 0:01:30.895 ****** 2026-01-08 00:57:43.571707 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.571711 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.571714 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.571718 | orchestrator | skipping: [testbed-node-0] 2026-01-08 
00:57:43.571722 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.571726 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.571729 | orchestrator | 2026-01-08 00:57:43.571733 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-01-08 00:57:43.571737 | orchestrator | Thursday 08 January 2026 00:48:02 +0000 (0:00:00.873) 0:01:31.769 ****** 2026-01-08 00:57:43.571741 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.571744 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.571748 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.571752 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.571756 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.571759 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.571763 | orchestrator | 2026-01-08 00:57:43.571769 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-01-08 00:57:43.571772 | orchestrator | Thursday 08 January 2026 00:48:03 +0000 (0:00:00.873) 0:01:32.642 ****** 2026-01-08 00:57:43.571776 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.571780 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.571784 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.571787 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.571791 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.571795 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.571799 | orchestrator | 2026-01-08 00:57:43.571803 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-01-08 00:57:43.571807 | orchestrator | Thursday 08 January 2026 00:48:03 +0000 (0:00:00.581) 0:01:33.224 ****** 2026-01-08 00:57:43.571811 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:57:43.571814 | orchestrator | 2026-01-08 00:57:43.571818 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-01-08 00:57:43.571822 | orchestrator | Thursday 08 January 2026 00:48:05 +0000 (0:00:01.389) 0:01:34.614 ****** 2026-01-08 00:57:43.571826 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.571830 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.571833 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.571837 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.571841 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.571844 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.571848 | orchestrator | 2026-01-08 00:57:43.571852 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-01-08 00:57:43.571856 | orchestrator | Thursday 08 January 2026 00:48:58 +0000 (0:00:52.944) 0:02:27.559 ****** 2026-01-08 00:57:43.571860 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-08 00:57:43.571866 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-08 00:57:43.571870 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-08 00:57:43.571874 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.571878 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-08 00:57:43.571881 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-08 00:57:43.571885 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-08 00:57:43.571889 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.571893 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  
2026-01-08 00:57:43.571897 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-08 00:57:43.571900 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-08 00:57:43.571904 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.571908 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-08 00:57:43.571912 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-08 00:57:43.571915 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-08 00:57:43.571919 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.571923 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-08 00:57:43.571927 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-08 00:57:43.571931 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-08 00:57:43.571934 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.571950 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-08 00:57:43.571954 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-08 00:57:43.571958 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-08 00:57:43.571962 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.571966 | orchestrator | 2026-01-08 00:57:43.571969 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-01-08 00:57:43.571973 | orchestrator | Thursday 08 January 2026 00:48:58 +0000 (0:00:00.647) 0:02:28.207 ****** 2026-01-08 00:57:43.571977 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.571981 | orchestrator | skipping: [testbed-node-4] 2026-01-08 
00:57:43.571985 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.571988 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.571992 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.571996 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.572000 | orchestrator | 2026-01-08 00:57:43.572003 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-01-08 00:57:43.572007 | orchestrator | Thursday 08 January 2026 00:48:59 +0000 (0:00:00.785) 0:02:28.992 ****** 2026-01-08 00:57:43.572011 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.572015 | orchestrator | 2026-01-08 00:57:43.572019 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-01-08 00:57:43.572022 | orchestrator | Thursday 08 January 2026 00:48:59 +0000 (0:00:00.151) 0:02:29.144 ****** 2026-01-08 00:57:43.572026 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.572030 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.572037 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.572044 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.572048 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.572051 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.572055 | orchestrator | 2026-01-08 00:57:43.572059 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-01-08 00:57:43.572066 | orchestrator | Thursday 08 January 2026 00:49:00 +0000 (0:00:00.692) 0:02:29.837 ****** 2026-01-08 00:57:43.572082 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.572126 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.572132 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.572138 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.572144 | orchestrator | skipping: [testbed-node-1] 2026-01-08 
00:57:43.572150 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.572156 | orchestrator | 2026-01-08 00:57:43.572162 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-01-08 00:57:43.572169 | orchestrator | Thursday 08 January 2026 00:49:01 +0000 (0:00:00.925) 0:02:30.762 ****** 2026-01-08 00:57:43.572175 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.572181 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.572187 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.572193 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.572197 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.572200 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.572204 | orchestrator | 2026-01-08 00:57:43.572208 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-01-08 00:57:43.572212 | orchestrator | Thursday 08 January 2026 00:49:02 +0000 (0:00:00.660) 0:02:31.423 ****** 2026-01-08 00:57:43.572215 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.572219 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.572223 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.572227 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.572231 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.572234 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.572238 | orchestrator | 2026-01-08 00:57:43.572242 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-01-08 00:57:43.572246 | orchestrator | Thursday 08 January 2026 00:49:04 +0000 (0:00:02.707) 0:02:34.130 ****** 2026-01-08 00:57:43.572250 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.572253 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.572257 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.572261 | orchestrator | ok: [testbed-node-0] 
2026-01-08 00:57:43.572264 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:57:43.572268 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:57:43.572272 | orchestrator |
2026-01-08 00:57:43.572275 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-01-08 00:57:43.572279 | orchestrator | Thursday 08 January 2026  00:49:05 +0000 (0:00:00.717)       0:02:34.847 ******
2026-01-08 00:57:43.572284 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:57:43.572288 | orchestrator |
2026-01-08 00:57:43.572292 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-01-08 00:57:43.572296 | orchestrator | Thursday 08 January 2026  00:49:06 +0000 (0:00:01.272)       0:02:36.119 ******
2026-01-08 00:57:43.572300 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.572304 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.572307 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.572311 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.572315 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.572318 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.572322 | orchestrator |
2026-01-08 00:57:43.572326 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-01-08 00:57:43.572330 | orchestrator | Thursday 08 January 2026  00:49:07 +0000 (0:00:00.853)       0:02:36.973 ******
2026-01-08 00:57:43.572333 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.572337 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.572341 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.572345 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.572348 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.572352 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.572359 | orchestrator |
2026-01-08 00:57:43.572363 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-01-08 00:57:43.572367 | orchestrator | Thursday 08 January 2026  00:49:08 +0000 (0:00:00.471)       0:02:37.444 ******
2026-01-08 00:57:43.572371 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.572375 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.572396 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.572400 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.572404 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.572408 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.572412 | orchestrator |
2026-01-08 00:57:43.572416 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-01-08 00:57:43.572419 | orchestrator | Thursday 08 January 2026  00:49:08 +0000 (0:00:00.599)       0:02:38.044 ******
2026-01-08 00:57:43.572423 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.572427 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.572431 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.572434 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.572438 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.572442 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.572446 | orchestrator |
2026-01-08 00:57:43.572450 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-01-08 00:57:43.572453 | orchestrator | Thursday 08 January 2026  00:49:09 +0000 (0:00:00.484)       0:02:38.529 ******
2026-01-08 00:57:43.572457 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.572461 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.572465 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.572468 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.572472 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.572476 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.572480 | orchestrator |
2026-01-08 00:57:43.572484 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-01-08 00:57:43.572487 | orchestrator | Thursday 08 January 2026  00:49:09 +0000 (0:00:00.683)       0:02:39.212 ******
2026-01-08 00:57:43.572533 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.572538 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.572542 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.572546 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.572549 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.572553 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.572557 | orchestrator |
2026-01-08 00:57:43.572561 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-01-08 00:57:43.572568 | orchestrator | Thursday 08 January 2026  00:49:10 +0000 (0:00:00.612)       0:02:39.824 ******
2026-01-08 00:57:43.572572 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.572575 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.572579 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.572583 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.572587 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.572590 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.572594 | orchestrator |
2026-01-08 00:57:43.572598 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-01-08 00:57:43.572602 | orchestrator | Thursday 08 January 2026  00:49:11 +0000 (0:00:00.778)       0:02:40.603 ******
2026-01-08 00:57:43.572605 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.572609 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.572613 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.572617 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.572621 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.572624 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.572628 | orchestrator |
2026-01-08 00:57:43.572632 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-01-08 00:57:43.572639 | orchestrator | Thursday 08 January 2026  00:49:12 +0000 (0:00:00.733)       0:02:41.337 ******
2026-01-08 00:57:43.572643 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.572647 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.572651 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.572655 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:57:43.572658 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:57:43.572662 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:57:43.572666 | orchestrator |
2026-01-08 00:57:43.572670 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-01-08 00:57:43.572674 | orchestrator | Thursday 08 January 2026  00:49:13 +0000 (0:00:01.103)       0:02:42.441 ******
2026-01-08 00:57:43.572677 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:57:43.572681 | orchestrator |
2026-01-08 00:57:43.572685 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-01-08 00:57:43.572689 | orchestrator | Thursday 08 January 2026  00:49:14 +0000 (0:00:00.914)       0:02:43.355 ******
2026-01-08 00:57:43.572693 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-01-08 00:57:43.572697 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-01-08 00:57:43.572701 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-01-08 00:57:43.572704 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-01-08 00:57:43.572708 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-01-08 00:57:43.572712 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-01-08 00:57:43.572716 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-01-08 00:57:43.572719 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-01-08 00:57:43.572723 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-01-08 00:57:43.572732 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-01-08 00:57:43.572736 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-01-08 00:57:43.572740 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-01-08 00:57:43.572743 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-01-08 00:57:43.572747 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-01-08 00:57:43.572751 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-01-08 00:57:43.572755 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-01-08 00:57:43.572759 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-01-08 00:57:43.572762 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-01-08 00:57:43.572781 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-01-08 00:57:43.572785 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-01-08 00:57:43.572789 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-01-08 00:57:43.572793 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-01-08 00:57:43.572797 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-01-08 00:57:43.572801 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-01-08 00:57:43.572804 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-01-08 00:57:43.572808 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-01-08 00:57:43.572812 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-01-08 00:57:43.572816 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-01-08 00:57:43.572819 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-01-08 00:57:43.572823 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-01-08 00:57:43.572827 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-01-08 00:57:43.572831 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-01-08 00:57:43.572837 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-01-08 00:57:43.572841 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-01-08 00:57:43.572845 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-01-08 00:57:43.572848 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-01-08 00:57:43.572852 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-01-08 00:57:43.572856 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-01-08 00:57:43.572860 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-01-08 00:57:43.572863 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-01-08 00:57:43.572869 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-01-08 00:57:43.572873 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-01-08 00:57:43.572877 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-01-08 00:57:43.572881 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-08 00:57:43.572885 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-01-08 00:57:43.572888 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-01-08 00:57:43.572892 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-01-08 00:57:43.572896 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-08 00:57:43.572900 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-08 00:57:43.572903 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-08 00:57:43.572907 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-01-08 00:57:43.572911 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-08 00:57:43.572915 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-01-08 00:57:43.572918 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-08 00:57:43.572922 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-08 00:57:43.572926 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-08 00:57:43.572930 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-08 00:57:43.572933 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-08 00:57:43.572937 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-08 00:57:43.572941 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-08 00:57:43.572945 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-08 00:57:43.572948 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-08 00:57:43.572952 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-08 00:57:43.572956 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-08 00:57:43.572960 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-08 00:57:43.572963 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-08 00:57:43.572967 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-08 00:57:43.572971 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-08 00:57:43.572975 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-08 00:57:43.572978 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-08 00:57:43.572982 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-08 00:57:43.572986 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-08 00:57:43.572990 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-08 00:57:43.572996 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-08 00:57:43.573000 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-08 00:57:43.573003 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-01-08 00:57:43.573019 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-08 00:57:43.573023 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-08 00:57:43.573027 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-08 00:57:43.573031 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-08 00:57:43.573035 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-08 00:57:43.573038 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-01-08 00:57:43.573042 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-08 00:57:43.573046 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-01-08 00:57:43.573050 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-08 00:57:43.573053 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-01-08 00:57:43.573057 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-08 00:57:43.573061 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-01-08 00:57:43.573065 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-01-08 00:57:43.573069 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-01-08 00:57:43.573072 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-08 00:57:43.573076 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-01-08 00:57:43.573080 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-01-08 00:57:43.573084 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-01-08 00:57:43.573098 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-01-08 00:57:43.573105 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-01-08 00:57:43.573112 | orchestrator |
2026-01-08 00:57:43.573119 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-01-08 00:57:43.573125 | orchestrator | Thursday 08 January 2026  00:49:20 +0000 (0:00:06.846)       0:02:50.201 ******
2026-01-08 00:57:43.573129 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.573133 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.573136 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.573141 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-08 00:57:43.573145 | orchestrator |
2026-01-08 00:57:43.573148 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-01-08 00:57:43.573152 | orchestrator | Thursday 08 January 2026  00:49:22 +0000 (0:00:01.101)       0:02:51.302 ******
2026-01-08 00:57:43.573156 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-08 00:57:43.573161 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-08 00:57:43.573167 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-08 00:57:43.573174 | orchestrator |
2026-01-08 00:57:43.573180 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-01-08 00:57:43.573186 | orchestrator | Thursday 08 January 2026  00:49:23 +0000 (0:00:01.339)       0:02:52.642 ******
2026-01-08 00:57:43.573191 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-08 00:57:43.573202 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-08 00:57:43.573207 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-08 00:57:43.573213 | orchestrator |
2026-01-08 00:57:43.573220 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-01-08 00:57:43.573226 | orchestrator | Thursday 08 January 2026  00:49:24 +0000 (0:00:01.288)       0:02:53.930 ******
2026-01-08 00:57:43.573232 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.573239 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.573245 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.573251 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.573258 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.573262 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.573266 | orchestrator |
2026-01-08 00:57:43.573269 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-01-08 00:57:43.573273 | orchestrator | Thursday 08 January 2026  00:49:25 +0000 (0:00:00.623)       0:02:54.554 ******
2026-01-08 00:57:43.573277 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.573281 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.573285 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.573289 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.573292 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.573296 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.573300 | orchestrator |
2026-01-08 00:57:43.573304 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-01-08 00:57:43.573308 | orchestrator | Thursday 08 January 2026  00:49:26 +0000 (0:00:00.831)       0:02:55.385 ******
2026-01-08 00:57:43.573311 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.573315 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.573319 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.573323 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.573327 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.573330 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.573334 | orchestrator |
2026-01-08 00:57:43.573353 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-01-08 00:57:43.573358 | orchestrator | Thursday 08 January 2026  00:49:26 +0000 (0:00:00.619)       0:02:56.005 ******
2026-01-08 00:57:43.573362 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.573365 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.573369 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.573373 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.573377 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.573380 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.573384 | orchestrator |
2026-01-08 00:57:43.573388 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-01-08 00:57:43.573392 | orchestrator | Thursday 08 January 2026  00:49:27 +0000 (0:00:00.851)       0:02:56.856 ******
2026-01-08 00:57:43.573395 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.573399 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.573403 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.573407 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.573410 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.573414 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.573418 | orchestrator |
2026-01-08 00:57:43.573422 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-01-08 00:57:43.573426 | orchestrator | Thursday 08 January 2026  00:49:28 +0000 (0:00:00.735)       0:02:57.515 ******
2026-01-08 00:57:43.573431 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.573437 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.573443 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.573450 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.573464 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.573470 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.573476 | orchestrator |
2026-01-08 00:57:43.573483 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-01-08 00:57:43.573489 | orchestrator | Thursday 08 January 2026  00:49:29 +0000 (0:00:00.715)       0:02:58.250 ******
2026-01-08 00:57:43.573495 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.573502 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.573511 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.573518 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.573524 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.573532 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.573539 | orchestrator |
2026-01-08 00:57:43.573547 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-01-08 00:57:43.573554 | orchestrator | Thursday 08 January 2026  00:49:29 +0000 (0:00:00.856)       0:02:58.966 ******
2026-01-08 00:57:43.573561 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.573565 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.573569 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.573573 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.573577 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.573580 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.573584 | orchestrator |
2026-01-08 00:57:43.573588 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-01-08 00:57:43.573592 | orchestrator | Thursday 08 January 2026  00:49:30 +0000 (0:00:00.856)       0:02:59.822 ******
2026-01-08 00:57:43.573595 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.573599 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.573603 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.573606 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.573610 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.573614 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.573618 | orchestrator |
2026-01-08 00:57:43.573621 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-01-08 00:57:43.573625 | orchestrator | Thursday 08 January 2026  00:49:33 +0000 (0:00:03.380)       0:03:03.203 ******
2026-01-08 00:57:43.573629 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.573633 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.573637 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.573640 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.573644 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.573648 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.573652 | orchestrator |
2026-01-08 00:57:43.573655 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-01-08 00:57:43.573659 | orchestrator | Thursday 08 January 2026  00:49:34 +0000 (0:00:00.778)       0:03:03.981 ******
2026-01-08 00:57:43.573663 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.573667 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.573670 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.573674 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.573678 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.573682 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.573685 | orchestrator |
2026-01-08 00:57:43.573689 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-01-08 00:57:43.573693 | orchestrator | Thursday 08 January 2026  00:49:35 +0000 (0:00:00.776)       0:03:04.758 ******
2026-01-08 00:57:43.573697 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.573700 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.573704 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.573708 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.573712 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.573715 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.573722 | orchestrator |
2026-01-08 00:57:43.573726 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-01-08 00:57:43.573758 | orchestrator | Thursday 08 January 2026  00:49:36 +0000 (0:00:00.819)       0:03:05.578 ******
2026-01-08 00:57:43.573762 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-08 00:57:43.573766 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-08 00:57:43.573770 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-08 00:57:43.573774 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.573798 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.573802 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.573806 | orchestrator |
2026-01-08 00:57:43.573810 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-01-08 00:57:43.573814 | orchestrator | Thursday 08 January 2026  00:49:36 +0000 (0:00:00.574)       0:03:06.152 ******
2026-01-08 00:57:43.573819 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-01-08 00:57:43.573825 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-01-08 00:57:43.573830 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-01-08 00:57:43.573837 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-01-08 00:57:43.573841 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.573845 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-01-08 00:57:43.573849 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-01-08 00:57:43.573853 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.573857 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.573860 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.573864 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.573868 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.573872 | orchestrator |
2026-01-08 00:57:43.573875 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-01-08 00:57:43.573879 | orchestrator | Thursday 08 January 2026  00:49:38 +0000 (0:00:01.109)       0:03:07.262 ******
2026-01-08 00:57:43.573883 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.573887 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.573891 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.573898 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.573902 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.573905 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.573909 | orchestrator |
2026-01-08 00:57:43.573913 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-01-08 00:57:43.573917 | orchestrator | Thursday 08 January 2026  00:49:38 +0000 (0:00:00.542)       0:03:07.804 ******
2026-01-08 00:57:43.573920 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.573924 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.573928 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.573932 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.573935 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.573939 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.573943 | orchestrator |
2026-01-08 00:57:43.573947 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-08 00:57:43.573951 | orchestrator | Thursday 08 January 2026  00:49:39 +0000 (0:00:00.636)       0:03:08.441 ******
2026-01-08 00:57:43.573955 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.573958 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.573962 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.573966 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.573970 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.573973 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.573977 | orchestrator |
2026-01-08 00:57:43.573981 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-08 00:57:43.573985 | orchestrator | Thursday 08 January 2026  00:49:39 +0000 (0:00:00.553)       0:03:08.994 ******
2026-01-08 00:57:43.573988 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.573992 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.573996 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.574000 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.574003 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.574007 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.574011 | orchestrator |
2026-01-08 00:57:43.574036 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-08 00:57:43.574053 | orchestrator | Thursday 08 January 2026  00:49:40 +0000 (0:00:00.964)       0:03:09.959 ******
2026-01-08 00:57:43.574057 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.574061 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.574065 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.574069 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.574072 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.574076 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.574080 | orchestrator |
2026-01-08 00:57:43.574084 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-08 00:57:43.574098 | orchestrator | Thursday 08 January 2026  00:49:41 +0000 (0:00:00.817)       0:03:10.776 ******
2026-01-08 00:57:43.574109 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.574113 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.574117 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.574121 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.574124 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.574128 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.574132 | orchestrator |
2026-01-08 00:57:43.574141 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-08 00:57:43.574145 | orchestrator | Thursday 08 January 2026  00:49:42 +0000 (0:00:01.085)       0:03:11.861 ******
2026-01-08 00:57:43.574148 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-08 00:57:43.574152 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-08 00:57:43.574156 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-08 00:57:43.574160 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.574167 | orchestrator |
2026-01-08 00:57:43.574170 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-08 00:57:43.574174 | orchestrator | Thursday 08 January 2026  00:49:43 +0000 (0:00:00.454)       0:03:12.315 ******
2026-01-08 00:57:43.574178 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-08 00:57:43.574182 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-08 00:57:43.574188 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-08 00:57:43.574191 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.574195 | orchestrator |
2026-01-08 00:57:43.574199 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-08 00:57:43.574203 | orchestrator | Thursday 08 January 2026  00:49:43 +0000 (0:00:00.513)       0:03:12.829 ******
2026-01-08 00:57:43.574207 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-08 00:57:43.574210 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-08 00:57:43.574214 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-08 00:57:43.574218 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.574222 | orchestrator |
2026-01-08 00:57:43.574226 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-08 00:57:43.574229 | orchestrator | Thursday 08 January 2026  00:49:43 +0000 (0:00:00.407)       0:03:13.237 ******
2026-01-08 00:57:43.574233 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.574237 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.574241 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.574245 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.574248 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.574252 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.574256 | orchestrator |
2026-01-08 00:57:43.574260 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-08 00:57:43.574266 | orchestrator | Thursday 08 January 2026  00:49:44 +0000 (0:00:00.743)       0:03:13.981 ******
2026-01-08 00:57:43.574272 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-08 00:57:43.574278 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-08 00:57:43.574284 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-08 00:57:43.574290 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-01-08 00:57:43.574296 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.574302 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-01-08 00:57:43.574309 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.574315 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-01-08 00:57:43.574321 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.574328 | orchestrator |
2026-01-08 00:57:43.574334 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-01-08 00:57:43.574342 | orchestrator | Thursday 08 January 2026  00:49:48 +0000 (0:00:03.364)       0:03:17.346 ******
2026-01-08 00:57:43.574348 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:57:43.574355 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:57:43.574362 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:57:43.574369 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:57:43.574375 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:57:43.574383 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:57:43.574390 | orchestrator |
2026-01-08 00:57:43.574397 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-08 00:57:43.574404 | orchestrator | Thursday 08 January 2026  00:49:51 +0000 (0:00:03.883)       0:03:21.230 ******
2026-01-08 00:57:43.574411 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:57:43.574417 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:57:43.574424 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:57:43.574431 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:57:43.574437 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:57:43.574441 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:57:43.574445 | orchestrator |
2026-01-08 00:57:43.574452 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-01-08 00:57:43.574456 | orchestrator | Thursday 08 January 2026  00:49:52 +0000 (0:00:01.006)       0:03:22.236 ******
2026-01-08 00:57:43.574460 | orchestrator |
skipping: [testbed-node-3] 2026-01-08 00:57:43.574463 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.574467 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.574471 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-2, testbed-node-1 2026-01-08 00:57:43.574475 | orchestrator | 2026-01-08 00:57:43.574479 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-01-08 00:57:43.574506 | orchestrator | Thursday 08 January 2026 00:49:54 +0000 (0:00:01.077) 0:03:23.314 ****** 2026-01-08 00:57:43.574513 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.574519 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.574524 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.574531 | orchestrator | 2026-01-08 00:57:43.574536 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-01-08 00:57:43.574542 | orchestrator | Thursday 08 January 2026 00:49:54 +0000 (0:00:00.403) 0:03:23.718 ****** 2026-01-08 00:57:43.574548 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:57:43.574553 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:57:43.574559 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:57:43.574565 | orchestrator | 2026-01-08 00:57:43.574570 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-01-08 00:57:43.574576 | orchestrator | Thursday 08 January 2026 00:49:55 +0000 (0:00:01.212) 0:03:24.930 ****** 2026-01-08 00:57:43.574583 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-08 00:57:43.574589 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-08 00:57:43.574593 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-08 00:57:43.574596 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.574600 | orchestrator | 
2026-01-08 00:57:43.574604 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-01-08 00:57:43.574608 | orchestrator | Thursday 08 January 2026 00:49:56 +0000 (0:00:00.612) 0:03:25.543 ****** 2026-01-08 00:57:43.574612 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.574615 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.574620 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.574627 | orchestrator | 2026-01-08 00:57:43.574632 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-08 00:57:43.574636 | orchestrator | Thursday 08 January 2026 00:49:56 +0000 (0:00:00.432) 0:03:25.975 ****** 2026-01-08 00:57:43.574639 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.574643 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.574647 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.574654 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 00:57:43.574658 | orchestrator | 2026-01-08 00:57:43.574661 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-08 00:57:43.574665 | orchestrator | Thursday 08 January 2026 00:49:57 +0000 (0:00:01.141) 0:03:27.117 ****** 2026-01-08 00:57:43.574669 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-08 00:57:43.574673 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-08 00:57:43.574677 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-08 00:57:43.574681 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.574688 | orchestrator | 2026-01-08 00:57:43.574692 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-08 00:57:43.574696 | orchestrator | Thursday 08 January 2026 00:49:58 +0000 
(0:00:00.399) 0:03:27.517 ****** 2026-01-08 00:57:43.574700 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.574703 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.574707 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.574714 | orchestrator | 2026-01-08 00:57:43.574718 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-08 00:57:43.574722 | orchestrator | Thursday 08 January 2026 00:49:58 +0000 (0:00:00.358) 0:03:27.875 ****** 2026-01-08 00:57:43.574726 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.574730 | orchestrator | 2026-01-08 00:57:43.574733 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-08 00:57:43.574737 | orchestrator | Thursday 08 January 2026 00:49:58 +0000 (0:00:00.222) 0:03:28.098 ****** 2026-01-08 00:57:43.574741 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.574745 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.574748 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.574752 | orchestrator | 2026-01-08 00:57:43.574756 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-08 00:57:43.574760 | orchestrator | Thursday 08 January 2026 00:49:59 +0000 (0:00:00.321) 0:03:28.420 ****** 2026-01-08 00:57:43.574764 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.574767 | orchestrator | 2026-01-08 00:57:43.574771 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-08 00:57:43.574775 | orchestrator | Thursday 08 January 2026 00:49:59 +0000 (0:00:00.203) 0:03:28.623 ****** 2026-01-08 00:57:43.574779 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.574782 | orchestrator | 2026-01-08 00:57:43.574786 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-08 
00:57:43.574790 | orchestrator | Thursday 08 January 2026 00:49:59 +0000 (0:00:00.232) 0:03:28.855 ****** 2026-01-08 00:57:43.574802 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.574806 | orchestrator | 2026-01-08 00:57:43.574810 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-08 00:57:43.574815 | orchestrator | Thursday 08 January 2026 00:49:59 +0000 (0:00:00.114) 0:03:28.970 ****** 2026-01-08 00:57:43.574822 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.574827 | orchestrator | 2026-01-08 00:57:43.574833 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-01-08 00:57:43.574839 | orchestrator | Thursday 08 January 2026 00:50:00 +0000 (0:00:00.809) 0:03:29.779 ****** 2026-01-08 00:57:43.574845 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.574852 | orchestrator | 2026-01-08 00:57:43.574858 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-01-08 00:57:43.574864 | orchestrator | Thursday 08 January 2026 00:50:00 +0000 (0:00:00.258) 0:03:30.038 ****** 2026-01-08 00:57:43.574870 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-08 00:57:43.574877 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-08 00:57:43.574883 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-08 00:57:43.574890 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.574896 | orchestrator | 2026-01-08 00:57:43.574902 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-08 00:57:43.574929 | orchestrator | Thursday 08 January 2026 00:50:01 +0000 (0:00:00.404) 0:03:30.442 ****** 2026-01-08 00:57:43.574934 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.574938 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.574941 | 
orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.574945 | orchestrator | 2026-01-08 00:57:43.574949 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-08 00:57:43.574953 | orchestrator | Thursday 08 January 2026 00:50:01 +0000 (0:00:00.359) 0:03:30.802 ****** 2026-01-08 00:57:43.574957 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.574960 | orchestrator | 2026-01-08 00:57:43.574964 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-08 00:57:43.574968 | orchestrator | Thursday 08 January 2026 00:50:01 +0000 (0:00:00.223) 0:03:31.026 ****** 2026-01-08 00:57:43.574972 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.574976 | orchestrator | 2026-01-08 00:57:43.574983 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-08 00:57:43.574987 | orchestrator | Thursday 08 January 2026 00:50:02 +0000 (0:00:00.232) 0:03:31.258 ****** 2026-01-08 00:57:43.574991 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.574994 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.574998 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.575002 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 00:57:43.575006 | orchestrator | 2026-01-08 00:57:43.575010 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-01-08 00:57:43.575013 | orchestrator | Thursday 08 January 2026 00:50:03 +0000 (0:00:01.120) 0:03:32.379 ****** 2026-01-08 00:57:43.575017 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.575021 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.575025 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.575028 | orchestrator | 2026-01-08 00:57:43.575032 | orchestrator | RUNNING HANDLER 
[ceph-handler : Copy mds restart script] *********************** 2026-01-08 00:57:43.575036 | orchestrator | Thursday 08 January 2026 00:50:03 +0000 (0:00:00.346) 0:03:32.726 ****** 2026-01-08 00:57:43.575042 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:57:43.575046 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:57:43.575050 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:57:43.575054 | orchestrator | 2026-01-08 00:57:43.575058 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-08 00:57:43.575061 | orchestrator | Thursday 08 January 2026 00:50:04 +0000 (0:00:01.276) 0:03:34.002 ****** 2026-01-08 00:57:43.575065 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-08 00:57:43.575069 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-08 00:57:43.575075 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-08 00:57:43.575082 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.575105 | orchestrator | 2026-01-08 00:57:43.575112 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-08 00:57:43.575119 | orchestrator | Thursday 08 January 2026 00:50:05 +0000 (0:00:01.030) 0:03:35.032 ****** 2026-01-08 00:57:43.575125 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.575131 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.575137 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.575143 | orchestrator | 2026-01-08 00:57:43.575149 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-08 00:57:43.575155 | orchestrator | Thursday 08 January 2026 00:50:06 +0000 (0:00:00.568) 0:03:35.600 ****** 2026-01-08 00:57:43.575161 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.575168 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.575174 | orchestrator | 
skipping: [testbed-node-2] 2026-01-08 00:57:43.575180 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 00:57:43.575187 | orchestrator | 2026-01-08 00:57:43.575191 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-08 00:57:43.575195 | orchestrator | Thursday 08 January 2026 00:50:07 +0000 (0:00:00.881) 0:03:36.482 ****** 2026-01-08 00:57:43.575198 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.575202 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.575206 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.575210 | orchestrator | 2026-01-08 00:57:43.575214 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-08 00:57:43.575217 | orchestrator | Thursday 08 January 2026 00:50:07 +0000 (0:00:00.676) 0:03:37.158 ****** 2026-01-08 00:57:43.575221 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:57:43.575225 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:57:43.575229 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:57:43.575233 | orchestrator | 2026-01-08 00:57:43.575236 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-08 00:57:43.575245 | orchestrator | Thursday 08 January 2026 00:50:09 +0000 (0:00:01.183) 0:03:38.342 ****** 2026-01-08 00:57:43.575248 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-08 00:57:43.575252 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-08 00:57:43.575256 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-08 00:57:43.575260 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.575264 | orchestrator | 2026-01-08 00:57:43.575267 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-08 
00:57:43.575271 | orchestrator | Thursday 08 January 2026 00:50:09 +0000 (0:00:00.852) 0:03:39.195 ****** 2026-01-08 00:57:43.575275 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.575279 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.575282 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.575286 | orchestrator | 2026-01-08 00:57:43.575290 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-01-08 00:57:43.575294 | orchestrator | Thursday 08 January 2026 00:50:10 +0000 (0:00:00.368) 0:03:39.563 ****** 2026-01-08 00:57:43.575298 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.575301 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.575305 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.575309 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.575313 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.575333 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.575338 | orchestrator | 2026-01-08 00:57:43.575341 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-08 00:57:43.575345 | orchestrator | Thursday 08 January 2026 00:50:11 +0000 (0:00:00.949) 0:03:40.513 ****** 2026-01-08 00:57:43.575349 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.575353 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.575357 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.575360 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:57:43.575364 | orchestrator | 2026-01-08 00:57:43.575368 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-08 00:57:43.575372 | orchestrator | Thursday 08 January 2026 00:50:12 +0000 (0:00:00.967) 0:03:41.481 ****** 2026-01-08 00:57:43.575376 | orchestrator | 
ok: [testbed-node-0] 2026-01-08 00:57:43.575379 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.575383 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.575387 | orchestrator | 2026-01-08 00:57:43.575391 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-08 00:57:43.575394 | orchestrator | Thursday 08 January 2026 00:50:12 +0000 (0:00:00.613) 0:03:42.094 ****** 2026-01-08 00:57:43.575398 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:57:43.575402 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:57:43.575406 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:57:43.575409 | orchestrator | 2026-01-08 00:57:43.575413 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-08 00:57:43.575417 | orchestrator | Thursday 08 January 2026 00:50:14 +0000 (0:00:01.152) 0:03:43.246 ****** 2026-01-08 00:57:43.575421 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-08 00:57:43.575424 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-08 00:57:43.575428 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-08 00:57:43.575432 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.575436 | orchestrator | 2026-01-08 00:57:43.575442 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-08 00:57:43.575446 | orchestrator | Thursday 08 January 2026 00:50:14 +0000 (0:00:00.640) 0:03:43.886 ****** 2026-01-08 00:57:43.575450 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.575453 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.575457 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.575461 | orchestrator | 2026-01-08 00:57:43.575467 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-01-08 00:57:43.575471 | orchestrator | 2026-01-08 
00:57:43.575475 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-08 00:57:43.575479 | orchestrator | Thursday 08 January 2026 00:50:15 +0000 (0:00:00.536) 0:03:44.423 ****** 2026-01-08 00:57:43.575483 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:57:43.575487 | orchestrator | 2026-01-08 00:57:43.575491 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-08 00:57:43.575495 | orchestrator | Thursday 08 January 2026 00:50:16 +0000 (0:00:00.818) 0:03:45.242 ****** 2026-01-08 00:57:43.575499 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:57:43.575502 | orchestrator | 2026-01-08 00:57:43.575506 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-08 00:57:43.575510 | orchestrator | Thursday 08 January 2026 00:50:16 +0000 (0:00:00.532) 0:03:45.774 ****** 2026-01-08 00:57:43.575514 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.575517 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.575521 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.575525 | orchestrator | 2026-01-08 00:57:43.575529 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-08 00:57:43.575533 | orchestrator | Thursday 08 January 2026 00:50:17 +0000 (0:00:00.910) 0:03:46.685 ****** 2026-01-08 00:57:43.575536 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.575540 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.575544 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.575548 | orchestrator | 2026-01-08 00:57:43.575551 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2026-01-08 00:57:43.575555 | orchestrator | Thursday 08 January 2026 00:50:17 +0000 (0:00:00.309) 0:03:46.994 ****** 2026-01-08 00:57:43.575559 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.575563 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.575566 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.575570 | orchestrator | 2026-01-08 00:57:43.575574 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-08 00:57:43.575578 | orchestrator | Thursday 08 January 2026 00:50:18 +0000 (0:00:00.304) 0:03:47.299 ****** 2026-01-08 00:57:43.575582 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.575585 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.575589 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.575595 | orchestrator | 2026-01-08 00:57:43.575601 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-08 00:57:43.575607 | orchestrator | Thursday 08 January 2026 00:50:18 +0000 (0:00:00.335) 0:03:47.635 ****** 2026-01-08 00:57:43.575613 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.575619 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.575626 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.575633 | orchestrator | 2026-01-08 00:57:43.575639 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-08 00:57:43.575646 | orchestrator | Thursday 08 January 2026 00:50:19 +0000 (0:00:00.946) 0:03:48.581 ****** 2026-01-08 00:57:43.575652 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.575658 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.575664 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.575671 | orchestrator | 2026-01-08 00:57:43.575678 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-08 
00:57:43.575685 | orchestrator | Thursday 08 January 2026 00:50:19 +0000 (0:00:00.364) 0:03:48.946 ****** 2026-01-08 00:57:43.575713 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.575718 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.575722 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.575726 | orchestrator | 2026-01-08 00:57:43.575729 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-08 00:57:43.575741 | orchestrator | Thursday 08 January 2026 00:50:20 +0000 (0:00:00.344) 0:03:49.290 ****** 2026-01-08 00:57:43.575744 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.575748 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.575752 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.575756 | orchestrator | 2026-01-08 00:57:43.575760 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-08 00:57:43.575763 | orchestrator | Thursday 08 January 2026 00:50:20 +0000 (0:00:00.782) 0:03:50.072 ****** 2026-01-08 00:57:43.575769 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.575775 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.575782 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.575787 | orchestrator | 2026-01-08 00:57:43.575794 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-08 00:57:43.575800 | orchestrator | Thursday 08 January 2026 00:50:21 +0000 (0:00:00.981) 0:03:51.054 ****** 2026-01-08 00:57:43.575808 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.575815 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.575821 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.575828 | orchestrator | 2026-01-08 00:57:43.575835 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-08 00:57:43.575842 | orchestrator | 
Thursday 08 January 2026 00:50:22 +0000 (0:00:00.352) 0:03:51.406 ****** 2026-01-08 00:57:43.575848 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.575853 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.575856 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.575860 | orchestrator | 2026-01-08 00:57:43.575864 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-08 00:57:43.575868 | orchestrator | Thursday 08 January 2026 00:50:22 +0000 (0:00:00.369) 0:03:51.776 ****** 2026-01-08 00:57:43.575871 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.575875 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.575882 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.575886 | orchestrator | 2026-01-08 00:57:43.575890 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-08 00:57:43.575894 | orchestrator | Thursday 08 January 2026 00:50:22 +0000 (0:00:00.311) 0:03:52.088 ****** 2026-01-08 00:57:43.575898 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.575904 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.575910 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.575916 | orchestrator | 2026-01-08 00:57:43.575923 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-08 00:57:43.575929 | orchestrator | Thursday 08 January 2026 00:50:23 +0000 (0:00:00.300) 0:03:52.388 ****** 2026-01-08 00:57:43.575935 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.575941 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.575946 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.575952 | orchestrator | 2026-01-08 00:57:43.575958 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-08 00:57:43.575964 | orchestrator | Thursday 08 January 
2026 00:50:23 +0000 (0:00:00.594) 0:03:52.983 ****** 2026-01-08 00:57:43.575971 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.575977 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.575983 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.575989 | orchestrator | 2026-01-08 00:57:43.575996 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-08 00:57:43.576003 | orchestrator | Thursday 08 January 2026 00:50:24 +0000 (0:00:00.316) 0:03:53.300 ****** 2026-01-08 00:57:43.576009 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.576016 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.576022 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.576029 | orchestrator | 2026-01-08 00:57:43.576036 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-08 00:57:43.576048 | orchestrator | Thursday 08 January 2026 00:50:24 +0000 (0:00:00.302) 0:03:53.603 ****** 2026-01-08 00:57:43.576052 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.576056 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.576060 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.576063 | orchestrator | 2026-01-08 00:57:43.576067 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-08 00:57:43.576071 | orchestrator | Thursday 08 January 2026 00:50:24 +0000 (0:00:00.345) 0:03:53.949 ****** 2026-01-08 00:57:43.576075 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.576079 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.576082 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.576086 | orchestrator | 2026-01-08 00:57:43.576107 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-08 00:57:43.576111 | orchestrator | Thursday 08 January 2026 00:50:25 +0000 (0:00:00.654) 
0:03:54.603 ****** 2026-01-08 00:57:43.576115 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.576119 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.576123 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.576126 | orchestrator | 2026-01-08 00:57:43.576130 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-01-08 00:57:43.576134 | orchestrator | Thursday 08 January 2026 00:50:25 +0000 (0:00:00.585) 0:03:55.189 ****** 2026-01-08 00:57:43.576138 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.576142 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.576146 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.576149 | orchestrator | 2026-01-08 00:57:43.576153 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-01-08 00:57:43.576157 | orchestrator | Thursday 08 January 2026 00:50:26 +0000 (0:00:00.349) 0:03:55.539 ****** 2026-01-08 00:57:43.576161 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:57:43.576165 | orchestrator | 2026-01-08 00:57:43.576169 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-01-08 00:57:43.576173 | orchestrator | Thursday 08 January 2026 00:50:27 +0000 (0:00:00.929) 0:03:56.468 ****** 2026-01-08 00:57:43.576176 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.576180 | orchestrator | 2026-01-08 00:57:43.576208 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-01-08 00:57:43.576215 | orchestrator | Thursday 08 January 2026 00:50:27 +0000 (0:00:00.152) 0:03:56.621 ****** 2026-01-08 00:57:43.576221 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-08 00:57:43.576227 | orchestrator | 2026-01-08 00:57:43.576233 | orchestrator | TASK [ceph-mon : Set_fact 
_initial_mon_key_success] **************************** 2026-01-08 00:57:43.576239 | orchestrator | Thursday 08 January 2026 00:50:28 +0000 (0:00:01.042) 0:03:57.663 ****** 2026-01-08 00:57:43.576245 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.576251 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.576257 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.576263 | orchestrator | 2026-01-08 00:57:43.576269 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-01-08 00:57:43.576275 | orchestrator | Thursday 08 January 2026 00:50:28 +0000 (0:00:00.400) 0:03:58.064 ****** 2026-01-08 00:57:43.576282 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.576288 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.576294 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.576300 | orchestrator | 2026-01-08 00:57:43.576304 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-01-08 00:57:43.576308 | orchestrator | Thursday 08 January 2026 00:50:29 +0000 (0:00:00.582) 0:03:58.647 ****** 2026-01-08 00:57:43.576311 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:57:43.576315 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:57:43.576319 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:57:43.576323 | orchestrator | 2026-01-08 00:57:43.576327 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-01-08 00:57:43.576334 | orchestrator | Thursday 08 January 2026 00:50:30 +0000 (0:00:01.354) 0:04:00.002 ****** 2026-01-08 00:57:43.576338 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:57:43.576342 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:57:43.576346 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:57:43.576349 | orchestrator | 2026-01-08 00:57:43.576353 | orchestrator | TASK [ceph-mon : Create monitor directory] 
************************************* 2026-01-08 00:57:43.576357 | orchestrator | Thursday 08 January 2026 00:50:31 +0000 (0:00:00.888) 0:04:00.890 ****** 2026-01-08 00:57:43.576364 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:57:43.576367 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:57:43.576371 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:57:43.576375 | orchestrator | 2026-01-08 00:57:43.576379 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-01-08 00:57:43.576383 | orchestrator | Thursday 08 January 2026 00:50:32 +0000 (0:00:00.699) 0:04:01.589 ****** 2026-01-08 00:57:43.576386 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.576390 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.576394 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.576398 | orchestrator | 2026-01-08 00:57:43.576402 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-01-08 00:57:43.576405 | orchestrator | Thursday 08 January 2026 00:50:33 +0000 (0:00:00.728) 0:04:02.318 ****** 2026-01-08 00:57:43.576409 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:57:43.576413 | orchestrator | 2026-01-08 00:57:43.576417 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-01-08 00:57:43.576420 | orchestrator | Thursday 08 January 2026 00:50:34 +0000 (0:00:01.910) 0:04:04.229 ****** 2026-01-08 00:57:43.576424 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.576429 | orchestrator | 2026-01-08 00:57:43.576435 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-01-08 00:57:43.576441 | orchestrator | Thursday 08 January 2026 00:50:35 +0000 (0:00:00.862) 0:04:05.092 ****** 2026-01-08 00:57:43.576446 | orchestrator | changed: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:57:43.576453 | orchestrator 
| ok: [testbed-node-0] => (item=None) 2026-01-08 00:57:43.576459 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:57:43.576464 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-01-08 00:57:43.576470 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-08 00:57:43.576477 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-08 00:57:43.576482 | orchestrator | changed: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-08 00:57:43.576488 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2026-01-08 00:57:43.576494 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-08 00:57:43.576500 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-01-08 00:57:43.576507 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-01-08 00:57:43.576513 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-01-08 00:57:43.576519 | orchestrator | 2026-01-08 00:57:43.576525 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-01-08 00:57:43.576532 | orchestrator | Thursday 08 January 2026 00:50:39 +0000 (0:00:03.272) 0:04:08.365 ****** 2026-01-08 00:57:43.576536 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:57:43.576540 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:57:43.576544 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:57:43.576548 | orchestrator | 2026-01-08 00:57:43.576551 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-01-08 00:57:43.576555 | orchestrator | Thursday 08 January 2026 00:50:40 +0000 (0:00:01.191) 0:04:09.557 ****** 2026-01-08 00:57:43.576559 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.576563 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.576567 | orchestrator | ok: [testbed-node-2] 
2026-01-08 00:57:43.576570 | orchestrator | 2026-01-08 00:57:43.576578 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-01-08 00:57:43.576582 | orchestrator | Thursday 08 January 2026 00:50:40 +0000 (0:00:00.325) 0:04:09.882 ****** 2026-01-08 00:57:43.576586 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.576589 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.576593 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.576597 | orchestrator | 2026-01-08 00:57:43.576601 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-01-08 00:57:43.576605 | orchestrator | Thursday 08 January 2026 00:50:41 +0000 (0:00:00.749) 0:04:10.631 ****** 2026-01-08 00:57:43.576608 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:57:43.576632 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:57:43.576637 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:57:43.576641 | orchestrator | 2026-01-08 00:57:43.576644 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-01-08 00:57:43.576648 | orchestrator | Thursday 08 January 2026 00:50:43 +0000 (0:00:01.851) 0:04:12.483 ****** 2026-01-08 00:57:43.576652 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:57:43.576656 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:57:43.576660 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:57:43.576664 | orchestrator | 2026-01-08 00:57:43.576668 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-01-08 00:57:43.576671 | orchestrator | Thursday 08 January 2026 00:50:44 +0000 (0:00:01.175) 0:04:13.658 ****** 2026-01-08 00:57:43.576675 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.576679 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.576683 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.576687 
| orchestrator | 2026-01-08 00:57:43.576691 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-01-08 00:57:43.576694 | orchestrator | Thursday 08 January 2026 00:50:44 +0000 (0:00:00.468) 0:04:14.126 ****** 2026-01-08 00:57:43.576698 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:57:43.576702 | orchestrator | 2026-01-08 00:57:43.576706 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-01-08 00:57:43.576710 | orchestrator | Thursday 08 January 2026 00:50:45 +0000 (0:00:00.756) 0:04:14.882 ****** 2026-01-08 00:57:43.576714 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.576718 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.576721 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.576725 | orchestrator | 2026-01-08 00:57:43.576729 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-01-08 00:57:43.576733 | orchestrator | Thursday 08 January 2026 00:50:45 +0000 (0:00:00.349) 0:04:15.232 ****** 2026-01-08 00:57:43.576737 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.576741 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.576745 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.576749 | orchestrator | 2026-01-08 00:57:43.576752 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-01-08 00:57:43.576756 | orchestrator | Thursday 08 January 2026 00:50:46 +0000 (0:00:00.309) 0:04:15.542 ****** 2026-01-08 00:57:43.576760 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:57:43.576765 | orchestrator | 2026-01-08 00:57:43.576768 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] 
***************** 2026-01-08 00:57:43.576772 | orchestrator | Thursday 08 January 2026 00:50:47 +0000 (0:00:00.717) 0:04:16.260 ****** 2026-01-08 00:57:43.576776 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:57:43.576780 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:57:43.576784 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:57:43.576788 | orchestrator | 2026-01-08 00:57:43.576791 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-01-08 00:57:43.576795 | orchestrator | Thursday 08 January 2026 00:50:48 +0000 (0:00:01.622) 0:04:17.882 ****** 2026-01-08 00:57:43.576805 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:57:43.576809 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:57:43.576813 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:57:43.576817 | orchestrator | 2026-01-08 00:57:43.576821 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-01-08 00:57:43.576825 | orchestrator | Thursday 08 January 2026 00:50:49 +0000 (0:00:00.997) 0:04:18.879 ****** 2026-01-08 00:57:43.576881 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:57:43.576893 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:57:43.576897 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:57:43.576900 | orchestrator | 2026-01-08 00:57:43.576904 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-01-08 00:57:43.576908 | orchestrator | Thursday 08 January 2026 00:50:51 +0000 (0:00:01.773) 0:04:20.653 ****** 2026-01-08 00:57:43.576912 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:57:43.576916 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:57:43.576920 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:57:43.576923 | orchestrator | 2026-01-08 00:57:43.576927 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] 
********************************** 2026-01-08 00:57:43.576931 | orchestrator | Thursday 08 January 2026 00:50:53 +0000 (0:00:02.419) 0:04:23.073 ****** 2026-01-08 00:57:43.576935 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1, testbed-node-2, testbed-node-0 2026-01-08 00:57:43.576939 | orchestrator | 2026-01-08 00:57:43.576943 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-01-08 00:57:43.576946 | orchestrator | Thursday 08 January 2026 00:50:54 +0000 (0:00:00.731) 0:04:23.805 ****** 2026-01-08 00:57:43.576950 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2026-01-08 00:57:43.576954 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.576958 | orchestrator | 2026-01-08 00:57:43.576962 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-01-08 00:57:43.576966 | orchestrator | Thursday 08 January 2026 00:51:16 +0000 (0:00:21.677) 0:04:45.483 ****** 2026-01-08 00:57:43.576969 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.576973 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.576977 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.576981 | orchestrator | 2026-01-08 00:57:43.576985 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-01-08 00:57:43.576988 | orchestrator | Thursday 08 January 2026 00:51:24 +0000 (0:00:07.998) 0:04:53.481 ****** 2026-01-08 00:57:43.576992 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.576996 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.577000 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.577003 | orchestrator | 2026-01-08 00:57:43.577007 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-01-08 00:57:43.577025 | orchestrator | 
Thursday 08 January 2026 00:51:24 +0000 (0:00:00.605) 0:04:54.086 ****** 2026-01-08 00:57:43.577031 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__67ad8b647b03f3f60af45d62cbc0578ef8a13067'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-01-08 00:57:43.577036 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__67ad8b647b03f3f60af45d62cbc0578ef8a13067'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-01-08 00:57:43.577041 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__67ad8b647b03f3f60af45d62cbc0578ef8a13067'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-01-08 00:57:43.577050 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__67ad8b647b03f3f60af45d62cbc0578ef8a13067'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-01-08 00:57:43.577054 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 
'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__67ad8b647b03f3f60af45d62cbc0578ef8a13067'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-01-08 00:57:43.577059 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__67ad8b647b03f3f60af45d62cbc0578ef8a13067'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__67ad8b647b03f3f60af45d62cbc0578ef8a13067'}])  2026-01-08 00:57:43.577063 | orchestrator | 2026-01-08 00:57:43.577067 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-08 00:57:43.577071 | orchestrator | Thursday 08 January 2026 00:51:40 +0000 (0:00:15.338) 0:05:09.425 ****** 2026-01-08 00:57:43.577075 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.577078 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.577082 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.577086 | orchestrator | 2026-01-08 00:57:43.577104 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-01-08 00:57:43.577111 | orchestrator | Thursday 08 January 2026 00:51:40 +0000 (0:00:00.387) 0:05:09.813 ****** 2026-01-08 00:57:43.577117 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:57:43.577122 | orchestrator | 2026-01-08 00:57:43.577127 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-01-08 00:57:43.577134 | orchestrator | Thursday 08 January 2026 00:51:41 +0000 (0:00:00.830) 0:05:10.643 ****** 2026-01-08 00:57:43.577138 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.577142 | orchestrator | ok: [testbed-node-1] 2026-01-08 
00:57:43.577145 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.577149 | orchestrator | 2026-01-08 00:57:43.577153 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-01-08 00:57:43.577157 | orchestrator | Thursday 08 January 2026 00:51:41 +0000 (0:00:00.370) 0:05:11.014 ****** 2026-01-08 00:57:43.577163 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.577169 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.577175 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.577180 | orchestrator | 2026-01-08 00:57:43.577186 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-01-08 00:57:43.577193 | orchestrator | Thursday 08 January 2026 00:51:42 +0000 (0:00:00.418) 0:05:11.433 ****** 2026-01-08 00:57:43.577199 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-08 00:57:43.577205 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-08 00:57:43.577211 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-08 00:57:43.577218 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.577225 | orchestrator | 2026-01-08 00:57:43.577231 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-01-08 00:57:43.577237 | orchestrator | Thursday 08 January 2026 00:51:43 +0000 (0:00:00.864) 0:05:12.297 ****** 2026-01-08 00:57:43.577247 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.577252 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.577276 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.577287 | orchestrator | 2026-01-08 00:57:43.577293 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-01-08 00:57:43.577299 | orchestrator | 2026-01-08 00:57:43.577305 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-01-08 00:57:43.577310 | orchestrator | Thursday 08 January 2026 00:51:43 +0000 (0:00:00.878) 0:05:13.175 ****** 2026-01-08 00:57:43.577316 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:57:43.577323 | orchestrator | 2026-01-08 00:57:43.577329 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-08 00:57:43.577336 | orchestrator | Thursday 08 January 2026 00:51:44 +0000 (0:00:00.833) 0:05:14.008 ****** 2026-01-08 00:57:43.577342 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:57:43.577348 | orchestrator | 2026-01-08 00:57:43.577355 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-08 00:57:43.577361 | orchestrator | Thursday 08 January 2026 00:51:45 +0000 (0:00:00.833) 0:05:14.842 ****** 2026-01-08 00:57:43.577367 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.577373 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.577379 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.577385 | orchestrator | 2026-01-08 00:57:43.577389 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-08 00:57:43.577393 | orchestrator | Thursday 08 January 2026 00:51:46 +0000 (0:00:00.749) 0:05:15.592 ****** 2026-01-08 00:57:43.577397 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.577401 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.577405 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.577408 | orchestrator | 2026-01-08 00:57:43.577412 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-08 00:57:43.577420 | orchestrator | Thursday 08 January 2026 00:51:46 +0000 
(0:00:00.291) 0:05:15.884 ****** 2026-01-08 00:57:43.577424 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.577427 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.577431 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.577435 | orchestrator | 2026-01-08 00:57:43.577439 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-08 00:57:43.577443 | orchestrator | Thursday 08 January 2026 00:51:47 +0000 (0:00:00.606) 0:05:16.490 ****** 2026-01-08 00:57:43.577446 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.577450 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.577454 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.577457 | orchestrator | 2026-01-08 00:57:43.577461 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-08 00:57:43.577465 | orchestrator | Thursday 08 January 2026 00:51:47 +0000 (0:00:00.337) 0:05:16.827 ****** 2026-01-08 00:57:43.577469 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.577473 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.577476 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.577480 | orchestrator | 2026-01-08 00:57:43.577484 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-08 00:57:43.577488 | orchestrator | Thursday 08 January 2026 00:51:48 +0000 (0:00:00.743) 0:05:17.571 ****** 2026-01-08 00:57:43.577491 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.577495 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.577499 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.577503 | orchestrator | 2026-01-08 00:57:43.577506 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-08 00:57:43.577510 | orchestrator | Thursday 08 January 2026 00:51:48 +0000 (0:00:00.342) 
0:05:17.913 ****** 2026-01-08 00:57:43.577518 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.577521 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.577525 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.577529 | orchestrator | 2026-01-08 00:57:43.577533 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-08 00:57:43.577539 | orchestrator | Thursday 08 January 2026 00:51:49 +0000 (0:00:00.646) 0:05:18.559 ****** 2026-01-08 00:57:43.577546 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.577555 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.577561 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.577567 | orchestrator | 2026-01-08 00:57:43.577573 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-08 00:57:43.577579 | orchestrator | Thursday 08 January 2026 00:51:50 +0000 (0:00:00.811) 0:05:19.371 ****** 2026-01-08 00:57:43.577584 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.577590 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.577595 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.577602 | orchestrator | 2026-01-08 00:57:43.577607 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-08 00:57:43.577613 | orchestrator | Thursday 08 January 2026 00:51:50 +0000 (0:00:00.805) 0:05:20.176 ****** 2026-01-08 00:57:43.577619 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.577625 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.577631 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.577638 | orchestrator | 2026-01-08 00:57:43.577644 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-08 00:57:43.577650 | orchestrator | Thursday 08 January 2026 00:51:51 +0000 (0:00:00.353) 0:05:20.530 ****** 2026-01-08 
00:57:43.577656 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.577663 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.577669 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.577675 | orchestrator | 2026-01-08 00:57:43.577681 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-08 00:57:43.577687 | orchestrator | Thursday 08 January 2026 00:51:51 +0000 (0:00:00.329) 0:05:20.860 ****** 2026-01-08 00:57:43.577691 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.577694 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.577698 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.577702 | orchestrator | 2026-01-08 00:57:43.577706 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-08 00:57:43.577729 | orchestrator | Thursday 08 January 2026 00:51:52 +0000 (0:00:00.641) 0:05:21.501 ****** 2026-01-08 00:57:43.577734 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.577737 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.577741 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.577745 | orchestrator | 2026-01-08 00:57:43.577749 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-08 00:57:43.577752 | orchestrator | Thursday 08 January 2026 00:51:52 +0000 (0:00:00.296) 0:05:21.798 ****** 2026-01-08 00:57:43.577756 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.577760 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.577764 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.577768 | orchestrator | 2026-01-08 00:57:43.577772 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-08 00:57:43.577775 | orchestrator | Thursday 08 January 2026 00:51:52 +0000 (0:00:00.324) 0:05:22.122 ****** 2026-01-08 00:57:43.577779 | 
orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.577785 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.577791 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.577798 | orchestrator | 2026-01-08 00:57:43.577804 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-08 00:57:43.577811 | orchestrator | Thursday 08 January 2026 00:51:53 +0000 (0:00:00.301) 0:05:22.424 ****** 2026-01-08 00:57:43.577817 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.577827 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.577830 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.577834 | orchestrator | 2026-01-08 00:57:43.577838 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-08 00:57:43.577842 | orchestrator | Thursday 08 January 2026 00:51:53 +0000 (0:00:00.470) 0:05:22.894 ****** 2026-01-08 00:57:43.577846 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.577850 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.577853 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.577857 | orchestrator | 2026-01-08 00:57:43.577861 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-08 00:57:43.577865 | orchestrator | Thursday 08 January 2026 00:51:53 +0000 (0:00:00.295) 0:05:23.190 ****** 2026-01-08 00:57:43.577871 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.577875 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.577879 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.577883 | orchestrator | 2026-01-08 00:57:43.577886 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-08 00:57:43.577890 | orchestrator | Thursday 08 January 2026 00:51:54 +0000 (0:00:00.289) 0:05:23.480 ****** 2026-01-08 00:57:43.577894 | orchestrator | ok: [testbed-node-0] 
2026-01-08 00:57:43.577898 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.577902 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.577905 | orchestrator | 2026-01-08 00:57:43.577909 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-01-08 00:57:43.577913 | orchestrator | Thursday 08 January 2026 00:51:54 +0000 (0:00:00.679) 0:05:24.159 ****** 2026-01-08 00:57:43.577917 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-08 00:57:43.577920 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-08 00:57:43.577925 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-08 00:57:43.577928 | orchestrator | 2026-01-08 00:57:43.577932 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-01-08 00:57:43.577936 | orchestrator | Thursday 08 January 2026 00:51:55 +0000 (0:00:00.554) 0:05:24.714 ****** 2026-01-08 00:57:43.577939 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:57:43.577943 | orchestrator | 2026-01-08 00:57:43.577947 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-01-08 00:57:43.577951 | orchestrator | Thursday 08 January 2026 00:51:55 +0000 (0:00:00.466) 0:05:25.181 ****** 2026-01-08 00:57:43.577955 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:57:43.577958 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:57:43.577962 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:57:43.577966 | orchestrator | 2026-01-08 00:57:43.577970 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-01-08 00:57:43.577973 | orchestrator | Thursday 08 January 2026 00:51:56 +0000 (0:00:00.836) 0:05:26.017 ****** 2026-01-08 00:57:43.577977 | 
orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.577982 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.577988 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.577994 | orchestrator | 2026-01-08 00:57:43.577999 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-01-08 00:57:43.578005 | orchestrator | Thursday 08 January 2026 00:51:57 +0000 (0:00:00.423) 0:05:26.441 ****** 2026-01-08 00:57:43.578011 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-08 00:57:43.578057 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-08 00:57:43.578061 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-08 00:57:43.578065 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-01-08 00:57:43.578069 | orchestrator | 2026-01-08 00:57:43.578073 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-01-08 00:57:43.578077 | orchestrator | Thursday 08 January 2026 00:52:06 +0000 (0:00:09.512) 0:05:35.953 ****** 2026-01-08 00:57:43.578085 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.578100 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.578106 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.578110 | orchestrator | 2026-01-08 00:57:43.578114 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-01-08 00:57:43.578118 | orchestrator | Thursday 08 January 2026 00:52:07 +0000 (0:00:00.325) 0:05:36.278 ****** 2026-01-08 00:57:43.578122 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-08 00:57:43.578126 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-08 00:57:43.578129 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-08 00:57:43.578133 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:57:43.578137 | 
orchestrator | ok: [testbed-node-0] => (item=None) 2026-01-08 00:57:43.578157 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:57:43.578162 | orchestrator | 2026-01-08 00:57:43.578166 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-01-08 00:57:43.578170 | orchestrator | Thursday 08 January 2026 00:52:08 +0000 (0:00:01.892) 0:05:38.171 ****** 2026-01-08 00:57:43.578174 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-08 00:57:43.578177 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-08 00:57:43.578181 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-08 00:57:43.578185 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-08 00:57:43.578189 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-01-08 00:57:43.578193 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-01-08 00:57:43.578196 | orchestrator | 2026-01-08 00:57:43.578200 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-01-08 00:57:43.578204 | orchestrator | Thursday 08 January 2026 00:52:10 +0000 (0:00:01.272) 0:05:39.443 ****** 2026-01-08 00:57:43.578208 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.578211 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.578215 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.578219 | orchestrator | 2026-01-08 00:57:43.578223 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-01-08 00:57:43.578227 | orchestrator | Thursday 08 January 2026 00:52:11 +0000 (0:00:00.924) 0:05:40.368 ****** 2026-01-08 00:57:43.578231 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.578234 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.578238 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.578242 | orchestrator | 
2026-01-08 00:57:43.578246 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-01-08 00:57:43.578250 | orchestrator | Thursday 08 January 2026 00:52:11 +0000 (0:00:00.309) 0:05:40.678 ****** 2026-01-08 00:57:43.578253 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.578257 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.578261 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.578265 | orchestrator | 2026-01-08 00:57:43.578271 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-01-08 00:57:43.578275 | orchestrator | Thursday 08 January 2026 00:52:11 +0000 (0:00:00.342) 0:05:41.020 ****** 2026-01-08 00:57:43.578279 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:57:43.578283 | orchestrator | 2026-01-08 00:57:43.578287 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-01-08 00:57:43.578290 | orchestrator | Thursday 08 January 2026 00:52:12 +0000 (0:00:00.760) 0:05:41.780 ****** 2026-01-08 00:57:43.578294 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.578298 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.578302 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.578306 | orchestrator | 2026-01-08 00:57:43.578309 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-01-08 00:57:43.578313 | orchestrator | Thursday 08 January 2026 00:52:12 +0000 (0:00:00.363) 0:05:42.144 ****** 2026-01-08 00:57:43.578321 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.578327 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:57:43.578338 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.578345 | orchestrator | 2026-01-08 00:57:43.578351 | orchestrator | TASK [ceph-mgr : 
Include_tasks systemd.yml] ************************************ 2026-01-08 00:57:43.578357 | orchestrator | Thursday 08 January 2026 00:52:13 +0000 (0:00:00.374) 0:05:42.518 ****** 2026-01-08 00:57:43.578364 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:57:43.578371 | orchestrator | 2026-01-08 00:57:43.578377 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-01-08 00:57:43.578383 | orchestrator | Thursday 08 January 2026 00:52:14 +0000 (0:00:00.783) 0:05:43.301 ****** 2026-01-08 00:57:43.578390 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:57:43.578397 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:57:43.578404 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:57:43.578410 | orchestrator | 2026-01-08 00:57:43.578417 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-01-08 00:57:43.578424 | orchestrator | Thursday 08 January 2026 00:52:15 +0000 (0:00:01.560) 0:05:44.862 ****** 2026-01-08 00:57:43.578431 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:57:43.578438 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:57:43.578444 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:57:43.578450 | orchestrator | 2026-01-08 00:57:43.578454 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-01-08 00:57:43.578458 | orchestrator | Thursday 08 January 2026 00:52:17 +0000 (0:00:01.428) 0:05:46.290 ****** 2026-01-08 00:57:43.578461 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:57:43.578465 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:57:43.578469 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:57:43.578473 | orchestrator | 2026-01-08 00:57:43.578476 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-01-08 
00:57:43.578480 | orchestrator | Thursday 08 January 2026 00:52:18 +0000 (0:00:01.747) 0:05:48.038 ****** 2026-01-08 00:57:43.578484 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:57:43.578488 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:57:43.578491 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:57:43.578495 | orchestrator | 2026-01-08 00:57:43.578499 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-01-08 00:57:43.578505 | orchestrator | Thursday 08 January 2026 00:52:20 +0000 (0:00:02.035) 0:05:50.073 ****** 2026-01-08 00:57:43.578511 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.578517 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:57:43.578523 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-01-08 00:57:43.578529 | orchestrator | 2026-01-08 00:57:43.578535 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-01-08 00:57:43.578542 | orchestrator | Thursday 08 January 2026 00:52:21 +0000 (0:00:00.372) 0:05:50.445 ****** 2026-01-08 00:57:43.578570 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-01-08 00:57:43.578576 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-01-08 00:57:43.578580 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-01-08 00:57:43.578583 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-01-08 00:57:43.578587 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2026-01-08 00:57:43.578591 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-08 00:57:43.578595 | orchestrator | 2026-01-08 00:57:43.578599 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-01-08 00:57:43.578607 | orchestrator | Thursday 08 January 2026 00:52:51 +0000 (0:00:29.940) 0:06:20.385 ****** 2026-01-08 00:57:43.578611 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-08 00:57:43.578614 | orchestrator | 2026-01-08 00:57:43.578618 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-01-08 00:57:43.578622 | orchestrator | Thursday 08 January 2026 00:52:52 +0000 (0:00:01.334) 0:06:21.720 ****** 2026-01-08 00:57:43.578626 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.578630 | orchestrator | 2026-01-08 00:57:43.578633 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-01-08 00:57:43.578637 | orchestrator | Thursday 08 January 2026 00:52:52 +0000 (0:00:00.347) 0:06:22.067 ****** 2026-01-08 00:57:43.578641 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.578645 | orchestrator | 2026-01-08 00:57:43.578649 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-01-08 00:57:43.578652 | orchestrator | Thursday 08 January 2026 00:52:52 +0000 (0:00:00.132) 0:06:22.200 ****** 2026-01-08 00:57:43.578659 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-01-08 00:57:43.578663 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-01-08 00:57:43.578666 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-01-08 00:57:43.578670 | orchestrator | 2026-01-08 00:57:43.578674 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-01-08 00:57:43.578678 | orchestrator | Thursday 08 January 2026 00:52:59 +0000 (0:00:06.836) 0:06:29.037 ****** 2026-01-08 00:57:43.578682 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-01-08 00:57:43.578685 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-01-08 00:57:43.578689 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-01-08 00:57:43.578693 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-01-08 00:57:43.578697 | orchestrator | 2026-01-08 00:57:43.578701 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-08 00:57:43.578704 | orchestrator | Thursday 08 January 2026 00:53:05 +0000 (0:00:05.315) 0:06:34.353 ****** 2026-01-08 00:57:43.578708 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:57:43.578712 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:57:43.578716 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:57:43.578719 | orchestrator | 2026-01-08 00:57:43.578723 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-08 00:57:43.578727 | orchestrator | Thursday 08 January 2026 00:53:05 +0000 (0:00:00.692) 0:06:35.045 ****** 2026-01-08 00:57:43.578731 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:57:43.578735 | orchestrator | 2026-01-08 00:57:43.578738 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-08 00:57:43.578742 | orchestrator | Thursday 08 January 2026 00:53:06 +0000 (0:00:00.795) 0:06:35.840 ****** 2026-01-08 00:57:43.578746 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.578750 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.578753 | orchestrator | ok: 
[testbed-node-2] 2026-01-08 00:57:43.578757 | orchestrator | 2026-01-08 00:57:43.578761 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-08 00:57:43.578765 | orchestrator | Thursday 08 January 2026 00:53:06 +0000 (0:00:00.327) 0:06:36.167 ****** 2026-01-08 00:57:43.578769 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:57:43.578772 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:57:43.578776 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:57:43.578780 | orchestrator | 2026-01-08 00:57:43.578784 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-08 00:57:43.578788 | orchestrator | Thursday 08 January 2026 00:53:08 +0000 (0:00:01.579) 0:06:37.747 ****** 2026-01-08 00:57:43.578794 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-08 00:57:43.578798 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-08 00:57:43.578801 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-08 00:57:43.578805 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:57:43.578809 | orchestrator | 2026-01-08 00:57:43.578813 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-08 00:57:43.578817 | orchestrator | Thursday 08 January 2026 00:53:09 +0000 (0:00:00.673) 0:06:38.421 ****** 2026-01-08 00:57:43.578820 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.578824 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.578828 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.578832 | orchestrator | 2026-01-08 00:57:43.578836 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-01-08 00:57:43.578839 | orchestrator | 2026-01-08 00:57:43.578843 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-08 
00:57:43.578847 | orchestrator | Thursday 08 January 2026 00:53:10 +0000 (0:00:00.874) 0:06:39.296 ****** 2026-01-08 00:57:43.578863 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 00:57:43.578868 | orchestrator | 2026-01-08 00:57:43.578872 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-08 00:57:43.578876 | orchestrator | Thursday 08 January 2026 00:53:10 +0000 (0:00:00.561) 0:06:39.857 ****** 2026-01-08 00:57:43.578880 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 00:57:43.578883 | orchestrator | 2026-01-08 00:57:43.578887 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-08 00:57:43.578891 | orchestrator | Thursday 08 January 2026 00:53:11 +0000 (0:00:00.742) 0:06:40.600 ****** 2026-01-08 00:57:43.578895 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.578899 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.578903 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.578906 | orchestrator | 2026-01-08 00:57:43.578910 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-08 00:57:43.578914 | orchestrator | Thursday 08 January 2026 00:53:11 +0000 (0:00:00.305) 0:06:40.905 ****** 2026-01-08 00:57:43.578918 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.578922 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.578925 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.578929 | orchestrator | 2026-01-08 00:57:43.578933 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-08 00:57:43.578937 | orchestrator | Thursday 08 January 2026 00:53:12 +0000 (0:00:00.800) 0:06:41.706 ****** 
2026-01-08 00:57:43.578941 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.578944 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.578948 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.578952 | orchestrator | 2026-01-08 00:57:43.578956 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-08 00:57:43.578960 | orchestrator | Thursday 08 January 2026 00:53:13 +0000 (0:00:00.801) 0:06:42.507 ****** 2026-01-08 00:57:43.578963 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.578971 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.578975 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.578979 | orchestrator | 2026-01-08 00:57:43.578983 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-08 00:57:43.578987 | orchestrator | Thursday 08 January 2026 00:53:14 +0000 (0:00:01.349) 0:06:43.857 ****** 2026-01-08 00:57:43.578990 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.578994 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.578998 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.579002 | orchestrator | 2026-01-08 00:57:43.579006 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-08 00:57:43.579012 | orchestrator | Thursday 08 January 2026 00:53:14 +0000 (0:00:00.318) 0:06:44.176 ****** 2026-01-08 00:57:43.579016 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.579020 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.579023 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.579027 | orchestrator | 2026-01-08 00:57:43.579031 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-08 00:57:43.579035 | orchestrator | Thursday 08 January 2026 00:53:15 +0000 (0:00:00.327) 0:06:44.504 ****** 2026-01-08 00:57:43.579039 | 
orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.579043 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.579046 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.579050 | orchestrator | 2026-01-08 00:57:43.579054 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-08 00:57:43.579058 | orchestrator | Thursday 08 January 2026 00:53:15 +0000 (0:00:00.300) 0:06:44.804 ****** 2026-01-08 00:57:43.579062 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.579065 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.579069 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.579073 | orchestrator | 2026-01-08 00:57:43.579077 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-08 00:57:43.579081 | orchestrator | Thursday 08 January 2026 00:53:16 +0000 (0:00:01.107) 0:06:45.912 ****** 2026-01-08 00:57:43.579084 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.579103 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.579107 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.579111 | orchestrator | 2026-01-08 00:57:43.579115 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-08 00:57:43.579119 | orchestrator | Thursday 08 January 2026 00:53:17 +0000 (0:00:00.805) 0:06:46.718 ****** 2026-01-08 00:57:43.579122 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.579126 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.579130 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.579134 | orchestrator | 2026-01-08 00:57:43.579138 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-08 00:57:43.579141 | orchestrator | Thursday 08 January 2026 00:53:17 +0000 (0:00:00.369) 0:06:47.088 ****** 2026-01-08 00:57:43.579145 | orchestrator | skipping: 
[testbed-node-3] 2026-01-08 00:57:43.579149 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.579153 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.579156 | orchestrator | 2026-01-08 00:57:43.579160 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-08 00:57:43.579164 | orchestrator | Thursday 08 January 2026 00:53:18 +0000 (0:00:00.337) 0:06:47.425 ****** 2026-01-08 00:57:43.579168 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.579171 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.579175 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.579179 | orchestrator | 2026-01-08 00:57:43.579183 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-08 00:57:43.579186 | orchestrator | Thursday 08 January 2026 00:53:18 +0000 (0:00:00.596) 0:06:48.021 ****** 2026-01-08 00:57:43.579190 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.579194 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.579198 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.579201 | orchestrator | 2026-01-08 00:57:43.579205 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-08 00:57:43.579209 | orchestrator | Thursday 08 January 2026 00:53:19 +0000 (0:00:00.376) 0:06:48.398 ****** 2026-01-08 00:57:43.579213 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.579217 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.579223 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.579227 | orchestrator | 2026-01-08 00:57:43.579231 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-08 00:57:43.579234 | orchestrator | Thursday 08 January 2026 00:53:19 +0000 (0:00:00.317) 0:06:48.716 ****** 2026-01-08 00:57:43.579241 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.579245 | 
orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.579249 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.579253 | orchestrator | 2026-01-08 00:57:43.579256 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-08 00:57:43.579260 | orchestrator | Thursday 08 January 2026 00:53:19 +0000 (0:00:00.285) 0:06:49.002 ****** 2026-01-08 00:57:43.579264 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.579268 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.579272 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.579275 | orchestrator | 2026-01-08 00:57:43.579279 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-08 00:57:43.579283 | orchestrator | Thursday 08 January 2026 00:53:20 +0000 (0:00:00.599) 0:06:49.601 ****** 2026-01-08 00:57:43.579287 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.579291 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.579294 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.579298 | orchestrator | 2026-01-08 00:57:43.579302 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-08 00:57:43.579306 | orchestrator | Thursday 08 January 2026 00:53:20 +0000 (0:00:00.365) 0:06:49.967 ****** 2026-01-08 00:57:43.579310 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.579313 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.579317 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.579321 | orchestrator | 2026-01-08 00:57:43.579325 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-08 00:57:43.579329 | orchestrator | Thursday 08 January 2026 00:53:21 +0000 (0:00:00.361) 0:06:50.328 ****** 2026-01-08 00:57:43.579332 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.579336 | orchestrator | ok: 
[testbed-node-4] 2026-01-08 00:57:43.579340 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.579344 | orchestrator | 2026-01-08 00:57:43.579349 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-01-08 00:57:43.579353 | orchestrator | Thursday 08 January 2026 00:53:21 +0000 (0:00:00.784) 0:06:51.113 ****** 2026-01-08 00:57:43.579357 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.579361 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.579365 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.579369 | orchestrator | 2026-01-08 00:57:43.579372 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-01-08 00:57:43.579376 | orchestrator | Thursday 08 January 2026 00:53:22 +0000 (0:00:00.331) 0:06:51.444 ****** 2026-01-08 00:57:43.579380 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-08 00:57:43.579384 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-08 00:57:43.579388 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-08 00:57:43.579391 | orchestrator | 2026-01-08 00:57:43.579395 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-01-08 00:57:43.579399 | orchestrator | Thursday 08 January 2026 00:53:22 +0000 (0:00:00.642) 0:06:52.086 ****** 2026-01-08 00:57:43.579403 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 00:57:43.579406 | orchestrator | 2026-01-08 00:57:43.579410 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-01-08 00:57:43.579414 | orchestrator | Thursday 08 January 2026 00:53:23 +0000 (0:00:00.545) 0:06:52.631 ****** 2026-01-08 00:57:43.579418 | orchestrator | skipping: 
[testbed-node-3] 2026-01-08 00:57:43.579422 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.579429 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.579435 | orchestrator | 2026-01-08 00:57:43.579442 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-01-08 00:57:43.579449 | orchestrator | Thursday 08 January 2026 00:53:23 +0000 (0:00:00.559) 0:06:53.191 ****** 2026-01-08 00:57:43.579459 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.579467 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.579474 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.579481 | orchestrator | 2026-01-08 00:57:43.579488 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-01-08 00:57:43.579495 | orchestrator | Thursday 08 January 2026 00:53:24 +0000 (0:00:00.325) 0:06:53.516 ****** 2026-01-08 00:57:43.579502 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.579508 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.579515 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.579519 | orchestrator | 2026-01-08 00:57:43.579522 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-01-08 00:57:43.579526 | orchestrator | Thursday 08 January 2026 00:53:24 +0000 (0:00:00.646) 0:06:54.163 ****** 2026-01-08 00:57:43.579530 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.579534 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.579537 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.579541 | orchestrator | 2026-01-08 00:57:43.579545 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-01-08 00:57:43.579549 | orchestrator | Thursday 08 January 2026 00:53:25 +0000 (0:00:00.380) 0:06:54.544 ****** 2026-01-08 00:57:43.579553 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-08 00:57:43.579556 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-08 00:57:43.579560 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-08 00:57:43.579564 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-08 00:57:43.579568 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-08 00:57:43.579576 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-08 00:57:43.579582 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-08 00:57:43.579588 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-08 00:57:43.579594 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-08 00:57:43.579600 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-08 00:57:43.579606 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-08 00:57:43.579612 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-08 00:57:43.579618 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-08 00:57:43.579625 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-08 00:57:43.579631 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-08 00:57:43.579637 | orchestrator | 2026-01-08 00:57:43.579643 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-01-08 00:57:43.579650 | orchestrator | Thursday 08 January 2026 00:53:28 +0000 (0:00:03.365) 0:06:57.909 ******
2026-01-08 00:57:43.579656 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.579663 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.579669 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.579675 | orchestrator |
2026-01-08 00:57:43.579681 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-01-08 00:57:43.579688 | orchestrator | Thursday 08 January 2026 00:53:29 +0000 (0:00:00.336) 0:06:58.245 ******
2026-01-08 00:57:43.579697 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-08 00:57:43.579704 | orchestrator |
2026-01-08 00:57:43.579710 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-01-08 00:57:43.579721 | orchestrator | Thursday 08 January 2026 00:53:29 +0000 (0:00:00.507) 0:06:58.753 ******
2026-01-08 00:57:43.579728 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-08 00:57:43.579734 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-08 00:57:43.579741 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-01-08 00:57:43.579747 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-01-08 00:57:43.579754 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-08 00:57:43.579760 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-01-08 00:57:43.579766 | orchestrator |
2026-01-08 00:57:43.579773 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-01-08 00:57:43.579779 | orchestrator | Thursday 08 January 2026 00:53:30 +0000 (0:00:01.410) 0:07:00.163 ******
2026-01-08 00:57:43.579784 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-08 00:57:43.579788 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-08 00:57:43.579792 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-08 00:57:43.579796 | orchestrator |
2026-01-08 00:57:43.579799 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-01-08 00:57:43.579803 | orchestrator | Thursday 08 January 2026 00:53:33 +0000 (0:00:02.185) 0:07:02.348 ******
2026-01-08 00:57:43.579807 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-08 00:57:43.579811 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-08 00:57:43.579815 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:57:43.579818 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-08 00:57:43.579822 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-01-08 00:57:43.579826 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:57:43.579830 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-08 00:57:43.579833 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-01-08 00:57:43.579837 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:57:43.579841 | orchestrator |
2026-01-08 00:57:43.579845 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-01-08 00:57:43.579848 | orchestrator | Thursday 08 January 2026 00:53:34 +0000 (0:00:01.374) 0:07:03.723 ******
2026-01-08 00:57:43.579852 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-08 00:57:43.579856 | orchestrator |
2026-01-08 00:57:43.579860 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-01-08 00:57:43.579864 | orchestrator | Thursday 08 January 2026 00:53:36 +0000 (0:00:02.290) 0:07:06.013 ******
2026-01-08 00:57:43.579867 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-08 00:57:43.579871 | orchestrator |
2026-01-08 00:57:43.579875 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-01-08 00:57:43.579879 | orchestrator | Thursday 08 January 2026 00:53:37 +0000 (0:00:00.809) 0:07:06.823 ******
2026-01-08 00:57:43.579883 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-738668c3-85d9-5999-8ba6-58353e2d69fe', 'data_vg': 'ceph-738668c3-85d9-5999-8ba6-58353e2d69fe'})
2026-01-08 00:57:43.579887 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e7c35fc3-220b-5a3c-9d36-601219d17f28', 'data_vg': 'ceph-e7c35fc3-220b-5a3c-9d36-601219d17f28'})
2026-01-08 00:57:43.579895 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a2587794-ee13-56a9-b71d-149b2fd55b33', 'data_vg': 'ceph-a2587794-ee13-56a9-b71d-149b2fd55b33'})
2026-01-08 00:57:43.579898 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-3efd50ac-0c86-56a3-96dd-80e79744aaab', 'data_vg': 'ceph-3efd50ac-0c86-56a3-96dd-80e79744aaab'})
2026-01-08 00:57:43.579902 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1538380d-5182-5482-9616-e6fa16e7f592', 'data_vg': 'ceph-1538380d-5182-5482-9616-e6fa16e7f592'})
2026-01-08 00:57:43.579910 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-703f1367-865b-52a8-8f96-c728fe171d20', 'data_vg': 'ceph-703f1367-865b-52a8-8f96-c728fe171d20'})
2026-01-08 00:57:43.579914 | orchestrator |
2026-01-08 00:57:43.579918 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-01-08 00:57:43.579921 | orchestrator | Thursday 08 January 2026 00:54:20 +0000 (0:00:43.192) 0:07:50.015 ******
2026-01-08 00:57:43.579925 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.579929 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.579933 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.579937 | orchestrator |
2026-01-08 00:57:43.579940 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-01-08 00:57:43.579944 | orchestrator | Thursday 08 January 2026 00:54:21 +0000 (0:00:00.320) 0:07:50.336 ******
2026-01-08 00:57:43.579948 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-08 00:57:43.579952 | orchestrator |
2026-01-08 00:57:43.579956 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-01-08 00:57:43.579959 | orchestrator | Thursday 08 January 2026 00:54:21 +0000 (0:00:00.782) 0:07:51.118 ******
2026-01-08 00:57:43.579963 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.579967 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.579971 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.579975 | orchestrator |
2026-01-08 00:57:43.579981 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-01-08 00:57:43.579985 | orchestrator | Thursday 08 January 2026 00:54:22 +0000 (0:00:02.362) 0:07:51.771 ******
2026-01-08 00:57:43.579989 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.579992 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.579996 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.580000 | orchestrator |
2026-01-08 00:57:43.580004 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-01-08 00:57:43.580008 | orchestrator | Thursday 08 January 2026 00:54:24 +0000 (0:00:02.362) 0:07:54.134 ******
2026-01-08 00:57:43.580012 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-08 00:57:43.580015 | orchestrator |
2026-01-08 00:57:43.580019 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-01-08 00:57:43.580023 | orchestrator | Thursday 08 January 2026 00:54:25 +0000 (0:00:00.878) 0:07:55.012 ******
2026-01-08 00:57:43.580027 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:57:43.580031 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:57:43.580034 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:57:43.580038 | orchestrator |
2026-01-08 00:57:43.580042 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-01-08 00:57:43.580046 | orchestrator | Thursday 08 January 2026 00:54:26 +0000 (0:00:01.098) 0:07:56.111 ******
2026-01-08 00:57:43.580050 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:57:43.580053 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:57:43.580057 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:57:43.580061 | orchestrator |
2026-01-08 00:57:43.580065 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-01-08 00:57:43.580069 | orchestrator | Thursday 08 January 2026 00:54:27 +0000 (0:00:01.125) 0:07:57.237 ******
2026-01-08 00:57:43.580072 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:57:43.580076 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:57:43.580080 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:57:43.580084 | orchestrator |
2026-01-08 00:57:43.580101 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-01-08 00:57:43.580108 | orchestrator | Thursday 08 January 2026 00:54:29 +0000 (0:00:01.659) 0:07:58.896 ******
2026-01-08 00:57:43.580115 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.580121 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.580131 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.580138 | orchestrator |
2026-01-08 00:57:43.580142 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-01-08 00:57:43.580146 | orchestrator | Thursday 08 January 2026 00:54:30 +0000 (0:00:00.610) 0:07:59.507 ******
2026-01-08 00:57:43.580150 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.580153 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.580157 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.580161 | orchestrator |
2026-01-08 00:57:43.580165 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-01-08 00:57:43.580168 | orchestrator | Thursday 08 January 2026 00:54:30 +0000 (0:00:00.368) 0:07:59.876 ******
2026-01-08 00:57:43.580172 | orchestrator | ok: [testbed-node-3] => (item=5)
2026-01-08 00:57:43.580176 | orchestrator | ok: [testbed-node-4] => (item=3)
2026-01-08 00:57:43.580180 | orchestrator | ok: [testbed-node-5] => (item=4)
2026-01-08 00:57:43.580183 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-08 00:57:43.580187 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-01-08 00:57:43.580191 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-01-08 00:57:43.580195 | orchestrator |
2026-01-08 00:57:43.580198 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-01-08 00:57:43.580202 | orchestrator | Thursday 08 January 2026 00:54:31 +0000 (0:00:01.060) 0:08:00.936 ******
2026-01-08 00:57:43.580206 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-01-08 00:57:43.580210 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-01-08 00:57:43.580214 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-01-08 00:57:43.580217 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-01-08 00:57:43.580221 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-01-08 00:57:43.580228 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-01-08 00:57:43.580232 | orchestrator |
2026-01-08 00:57:43.580236 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-01-08 00:57:43.580240 | orchestrator | Thursday 08 January 2026 00:54:33 +0000 (0:00:02.262) 0:08:03.199 ******
2026-01-08 00:57:43.580244 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-01-08 00:57:43.580248 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-01-08 00:57:43.580251 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-01-08 00:57:43.580255 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-01-08 00:57:43.580259 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-01-08 00:57:43.580263 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-01-08 00:57:43.580267 | orchestrator |
2026-01-08 00:57:43.580271 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-01-08 00:57:43.580274 | orchestrator | Thursday 08 January 2026 00:54:38 +0000 (0:00:05.015) 0:08:08.214 ******
2026-01-08 00:57:43.580278 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.580282 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.580286 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-01-08 00:57:43.580290 | orchestrator |
2026-01-08 00:57:43.580294 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-01-08 00:57:43.580298 | orchestrator | Thursday 08 January 2026 00:54:41 +0000 (0:00:02.599) 0:08:10.814 ******
2026-01-08 00:57:43.580301 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.580305 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.580309 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-01-08 00:57:43.580313 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-01-08 00:57:43.580317 | orchestrator |
2026-01-08 00:57:43.580321 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-01-08 00:57:43.580324 | orchestrator | Thursday 08 January 2026 00:54:54 +0000 (0:00:12.512) 0:08:23.326 ******
2026-01-08 00:57:43.580330 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.580337 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.580341 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.580345 | orchestrator |
2026-01-08 00:57:43.580348 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-08 00:57:43.580352 | orchestrator | Thursday 08 January 2026 00:54:55 +0000 (0:00:01.159) 0:08:24.486 ******
2026-01-08 00:57:43.580356 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.580360 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.580364 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.580367 | orchestrator |
2026-01-08 00:57:43.580371 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-01-08 00:57:43.580375 | orchestrator | Thursday 08 January 2026 00:54:55 +0000 (0:00:00.366) 0:08:24.852 ******
2026-01-08 00:57:43.580379 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-08 00:57:43.580383 | orchestrator |
2026-01-08 00:57:43.580387 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-01-08 00:57:43.580391 | orchestrator | Thursday 08 January 2026 00:54:56 +0000 (0:00:00.549) 0:08:25.402 ******
2026-01-08 00:57:43.580394 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-08 00:57:43.580398 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-08 00:57:43.580402 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-08 00:57:43.580406 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.580410 | orchestrator |
2026-01-08 00:57:43.580413 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-01-08 00:57:43.580417 | orchestrator | Thursday 08 January 2026 00:54:57 +0000 (0:00:00.961) 0:08:26.364 ******
2026-01-08 00:57:43.580421 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.580425 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.580429 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.580433 | orchestrator |
2026-01-08 00:57:43.580436 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-01-08 00:57:43.580440 | orchestrator | Thursday 08 January 2026 00:54:57 +0000 (0:00:00.333) 0:08:26.697 ******
2026-01-08 00:57:43.580445 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.580451 | orchestrator |
2026-01-08 00:57:43.580456 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-01-08 00:57:43.580462 | orchestrator | Thursday 08 January 2026 00:54:57 +0000 (0:00:00.220) 0:08:26.918 ******
2026-01-08 00:57:43.580468 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.580474 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.580481 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.580487 | orchestrator |
2026-01-08 00:57:43.580493 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-01-08 00:57:43.580499 | orchestrator | Thursday 08 January 2026 00:54:58 +0000 (0:00:00.332) 0:08:27.250 ******
2026-01-08 00:57:43.580507 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.580513 | orchestrator |
2026-01-08 00:57:43.580520 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-01-08 00:57:43.580526 | orchestrator | Thursday 08 January 2026 00:54:58 +0000 (0:00:00.236) 0:08:27.487 ******
2026-01-08 00:57:43.580533 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.580540 | orchestrator |
2026-01-08 00:57:43.580546 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-01-08 00:57:43.580553 | orchestrator | Thursday 08 January 2026 00:54:58 +0000 (0:00:00.210) 0:08:27.697 ******
2026-01-08 00:57:43.580559 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.580565 | orchestrator |
2026-01-08 00:57:43.580572 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-01-08 00:57:43.580577 | orchestrator | Thursday 08 January 2026 00:54:58 +0000 (0:00:00.116) 0:08:27.814 ******
2026-01-08 00:57:43.580581 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.580585 | orchestrator |
2026-01-08 00:57:43.580596 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-01-08 00:57:43.580600 | orchestrator | Thursday 08 January 2026 00:54:58 +0000 (0:00:00.216) 0:08:28.030 ******
2026-01-08 00:57:43.580604 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.580609 | orchestrator |
2026-01-08 00:57:43.580615 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-01-08 00:57:43.580621 | orchestrator | Thursday 08 January 2026 00:54:59 +0000 (0:00:00.808) 0:08:28.839 ******
2026-01-08 00:57:43.580627 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-08 00:57:43.580633 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-08 00:57:43.580639 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-08 00:57:43.580646 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.580652 | orchestrator |
2026-01-08 00:57:43.580659 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-01-08 00:57:43.580666 | orchestrator | Thursday 08 January 2026 00:55:00 +0000 (0:00:00.408) 0:08:29.247 ******
2026-01-08 00:57:43.580672 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.580678 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.580684 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.580690 | orchestrator |
2026-01-08 00:57:43.580696 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-01-08 00:57:43.580702 | orchestrator | Thursday 08 January 2026 00:55:00 +0000 (0:00:00.325) 0:08:29.572 ******
2026-01-08 00:57:43.580709 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.580716 | orchestrator |
2026-01-08 00:57:43.580722 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-01-08 00:57:43.580728 | orchestrator | Thursday 08 January 2026 00:55:00 +0000 (0:00:00.233) 0:08:29.806 ******
2026-01-08 00:57:43.580735 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.580741 | orchestrator |
2026-01-08 00:57:43.580747 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-01-08 00:57:43.580754 | orchestrator |
2026-01-08 00:57:43.580759 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-08 00:57:43.580766 | orchestrator | Thursday 08 January 2026 00:55:01 +0000 (0:00:00.906) 0:08:30.713 ******
2026-01-08 00:57:43.580770 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:57:43.580775 | orchestrator |
2026-01-08 00:57:43.580779 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-08 00:57:43.580783 | orchestrator | Thursday 08 January 2026 00:55:02 +0000 (0:00:01.247) 0:08:31.960 ******
2026-01-08 00:57:43.580788 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:57:43.580795 | orchestrator |
2026-01-08 00:57:43.580801 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-08 00:57:43.580807 | orchestrator | Thursday 08 January 2026 00:55:03 +0000 (0:00:01.008) 0:08:32.969 ******
2026-01-08 00:57:43.580813 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.580819 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.580826 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.580832 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:57:43.580839 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:57:43.580845 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:57:43.580852 | orchestrator |
2026-01-08 00:57:43.580858 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-08 00:57:43.580865 | orchestrator | Thursday 08 January 2026 00:55:04 +0000 (0:00:01.271) 0:08:34.241 ******
2026-01-08 00:57:43.580870 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.580874 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.580877 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.580885 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.580889 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.580893 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.580897 | orchestrator |
2026-01-08 00:57:43.580900 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-08 00:57:43.580904 | orchestrator | Thursday 08 January 2026 00:55:05 +0000 (0:00:00.766) 0:08:35.007 ******
2026-01-08 00:57:43.580908 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.580912 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.580916 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.580919 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.580923 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.580927 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.580931 | orchestrator |
2026-01-08 00:57:43.580934 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-08 00:57:43.580938 | orchestrator | Thursday 08 January 2026 00:55:06 +0000 (0:00:01.111) 0:08:36.118 ******
2026-01-08 00:57:43.580942 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.580946 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.580949 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.580953 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.580957 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.580961 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.580964 | orchestrator |
2026-01-08 00:57:43.580968 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-08 00:57:43.580972 | orchestrator | Thursday 08 January 2026 00:55:07 +0000 (0:00:00.800) 0:08:36.919 ******
2026-01-08 00:57:43.580976 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.580979 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.580983 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.580987 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:57:43.580991 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:57:43.580995 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:57:43.580998 | orchestrator |
2026-01-08 00:57:43.581002 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-08 00:57:43.581006 | orchestrator | Thursday 08 January 2026 00:55:09 +0000 (0:00:01.375) 0:08:38.294 ******
2026-01-08 00:57:43.581010 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.581013 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.581021 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.581025 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.581028 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.581032 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.581036 | orchestrator |
2026-01-08 00:57:43.581040 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-08 00:57:43.581044 | orchestrator | Thursday 08 January 2026 00:55:09 +0000 (0:00:00.674) 0:08:38.968 ******
2026-01-08 00:57:43.581048 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.581051 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.581055 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.581059 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.581063 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.581066 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.581070 | orchestrator |
2026-01-08 00:57:43.581074 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-08 00:57:43.581078 | orchestrator | Thursday 08 January 2026 00:55:10 +0000 (0:00:00.898) 0:08:39.866 ******
2026-01-08 00:57:43.581082 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.581085 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.581101 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.581105 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:57:43.581109 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:57:43.581112 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:57:43.581116 | orchestrator |
2026-01-08 00:57:43.581120 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-08 00:57:43.581129 | orchestrator | Thursday 08 January 2026 00:55:11 +0000 (0:00:01.012) 0:08:40.878 ******
2026-01-08 00:57:43.581133 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.581137 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.581141 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.581144 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:57:43.581148 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:57:43.581152 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:57:43.581155 | orchestrator |
2026-01-08 00:57:43.581159 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-08 00:57:43.581163 | orchestrator | Thursday 08 January 2026 00:55:13 +0000 (0:00:01.373) 0:08:42.252 ******
2026-01-08 00:57:43.581169 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.581173 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.581177 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.581181 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.581185 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.581188 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.581192 | orchestrator |
2026-01-08 00:57:43.581196 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-08 00:57:43.581200 | orchestrator | Thursday 08 January 2026 00:55:13 +0000 (0:00:00.672) 0:08:42.925 ******
2026-01-08 00:57:43.581204 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.581207 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.581211 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.581215 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:57:43.581219 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:57:43.581223 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:57:43.581226 | orchestrator |
2026-01-08 00:57:43.581230 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-08 00:57:43.581234 | orchestrator | Thursday 08 January 2026 00:55:14 +0000 (0:00:00.906) 0:08:43.832 ******
2026-01-08 00:57:43.581238 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.581242 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.581245 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.581249 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.581253 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.581256 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.581261 | orchestrator |
2026-01-08 00:57:43.581267 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-08 00:57:43.581274 | orchestrator | Thursday 08 January 2026 00:55:15 +0000 (0:00:00.586) 0:08:44.418 ******
2026-01-08 00:57:43.581284 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.581289 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.581295 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.581301 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.581307 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.581313 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.581319 | orchestrator |
2026-01-08 00:57:43.581325 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-08 00:57:43.581331 | orchestrator | Thursday 08 January 2026 00:55:16 +0000 (0:00:00.846) 0:08:45.265 ******
2026-01-08 00:57:43.581337 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.581344 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.581350 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.581356 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.581362 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.581369 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.581374 | orchestrator |
2026-01-08 00:57:43.581381 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-08 00:57:43.581386 | orchestrator | Thursday 08 January 2026 00:55:16 +0000 (0:00:00.607) 0:08:45.873 ******
2026-01-08 00:57:43.581393 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.581398 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.581406 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.581410 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.581414 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.581417 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.581421 | orchestrator |
2026-01-08 00:57:43.581425 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-08 00:57:43.581429 | orchestrator | Thursday 08 January 2026 00:55:17 +0000 (0:00:00.922) 0:08:46.796 ******
2026-01-08 00:57:43.581433 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.581436 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.581440 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.581444 | orchestrator | skipping: [testbed-node-0]
2026-01-08 00:57:43.581448 | orchestrator | skipping: [testbed-node-1]
2026-01-08 00:57:43.581451 | orchestrator | skipping: [testbed-node-2]
2026-01-08 00:57:43.581455 | orchestrator |
2026-01-08 00:57:43.581459 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-08 00:57:43.581463 | orchestrator | Thursday 08 January 2026 00:55:18 +0000 (0:00:00.627) 0:08:47.424 ******
2026-01-08 00:57:43.581470 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:57:43.581474 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:57:43.581478 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:57:43.581481 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:57:43.581485 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:57:43.581489 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:57:43.581493 | orchestrator |
2026-01-08 00:57:43.581497 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-08 00:57:43.581500 | orchestrator | Thursday 08 January 2026 00:55:19 +0000 (0:00:00.886) 0:08:48.311 ******
2026-01-08 00:57:43.581504 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.581508 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.581512 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.581516 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:57:43.581519 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:57:43.581523 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:57:43.581527 | orchestrator |
2026-01-08 00:57:43.581531 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-08 00:57:43.581534 | orchestrator | Thursday 08 January 2026 00:55:19 +0000 (0:00:00.637) 0:08:48.948 ******
2026-01-08 00:57:43.581538 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.581542 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.581546 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.581549 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:57:43.581553 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:57:43.581557 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:57:43.581561 | orchestrator |
2026-01-08 00:57:43.581564 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-01-08 00:57:43.581568 | orchestrator | Thursday 08 January 2026 00:55:20 +0000 (0:00:01.254) 0:08:50.203 ******
2026-01-08 00:57:43.581572 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-08 00:57:43.581576 | orchestrator |
2026-01-08 00:57:43.581580 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-01-08 00:57:43.581584 | orchestrator | Thursday 08 January 2026 00:55:25 +0000 (0:00:04.689) 0:08:54.892 ******
2026-01-08 00:57:43.581587 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-08 00:57:43.581591 | orchestrator |
2026-01-08 00:57:43.581597 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-01-08 00:57:43.581601 | orchestrator | Thursday 08 January 2026 00:55:27 +0000 (0:00:01.775) 0:08:56.668 ******
2026-01-08 00:57:43.581605 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:57:43.581609 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:57:43.581613 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:57:43.581616 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:57:43.581620 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:57:43.581624 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:57:43.581630 | orchestrator |
2026-01-08 00:57:43.581634 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-01-08 00:57:43.581638 | orchestrator | Thursday 08 January 2026 00:55:29 +0000 (0:00:01.899) 0:08:58.567 ******
2026-01-08 00:57:43.581641 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:57:43.581645 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:57:43.581649 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:57:43.581653 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:57:43.581656 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:57:43.581660 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:57:43.581664 | orchestrator |
2026-01-08 00:57:43.581668 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-01-08 00:57:43.581671 | orchestrator | Thursday 08 January 2026 00:55:30 +0000 (0:00:00.949) 0:08:59.517 ******
2026-01-08 00:57:43.581676 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:57:43.581683 | orchestrator |
2026-01-08 00:57:43.581688 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-01-08 00:57:43.581694 | orchestrator | Thursday 08 January 2026 00:55:31 +0000 (0:00:01.323) 0:09:00.840 ******
2026-01-08 00:57:43.581700 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:57:43.581706 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:57:43.581711 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:57:43.581717 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:57:43.581722 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:57:43.581728 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:57:43.581734 | orchestrator |
2026-01-08 00:57:43.581740 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-01-08 00:57:43.581746 | orchestrator | Thursday 08 January 2026 00:55:33 +0000 (0:00:01.865) 0:09:02.706 ******
2026-01-08 00:57:43.581752 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:57:43.581758 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:57:43.581765 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:57:43.581771 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:57:43.581778 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:57:43.581782 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:57:43.581786 | orchestrator |
2026-01-08 00:57:43.581789 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-01-08 00:57:43.581793 | orchestrator | Thursday 08 January 2026 00:55:37 +0000 (0:00:03.555) 0:09:06.261 ******
2026-01-08 00:57:43.581797 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:57:43.581801 | orchestrator |
2026-01-08 00:57:43.581805 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-01-08 00:57:43.581809 | orchestrator | Thursday 08 January 2026 00:55:38 +0000 (0:00:01.320) 0:09:07.581 ******
2026-01-08 00:57:43.581812 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.581816 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:57:43.581821 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:57:43.581829 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:57:43.581839 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:57:43.581845 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:57:43.581851 | orchestrator |
2026-01-08 00:57:43.581857 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-01-08 00:57:43.581863 | orchestrator | Thursday 08 January 2026 00:55:39 +0000 (0:00:00.852) 0:09:08.434 ******
2026-01-08 00:57:43.581869 | orchestrator | changed: [testbed-node-3]
2026-01-08 00:57:43.581880 | orchestrator | changed: [testbed-node-4]
2026-01-08 00:57:43.581886 | orchestrator | changed: [testbed-node-5]
2026-01-08 00:57:43.581893 | orchestrator | changed: [testbed-node-0]
2026-01-08 00:57:43.581899 | orchestrator | changed: [testbed-node-1]
2026-01-08 00:57:43.581905 | orchestrator | changed: [testbed-node-2]
2026-01-08 00:57:43.581916 | orchestrator |
2026-01-08 00:57:43.581920 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-01-08 00:57:43.581925 | orchestrator | Thursday 08 January 2026 00:55:41 +0000 (0:00:02.257) 0:09:10.691 ******
2026-01-08 00:57:43.581931 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:57:43.581937 | orchestrator
| ok: [testbed-node-4] 2026-01-08 00:57:43.581943 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.581950 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:57:43.581956 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:57:43.581962 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:57:43.581968 | orchestrator | 2026-01-08 00:57:43.581974 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-01-08 00:57:43.581980 | orchestrator | 2026-01-08 00:57:43.581987 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-08 00:57:43.581993 | orchestrator | Thursday 08 January 2026 00:55:42 +0000 (0:00:01.207) 0:09:11.898 ****** 2026-01-08 00:57:43.581999 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 00:57:43.582006 | orchestrator | 2026-01-08 00:57:43.582011 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-08 00:57:43.582040 | orchestrator | Thursday 08 January 2026 00:55:43 +0000 (0:00:00.496) 0:09:12.395 ****** 2026-01-08 00:57:43.582044 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 00:57:43.582048 | orchestrator | 2026-01-08 00:57:43.582051 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-08 00:57:43.582055 | orchestrator | Thursday 08 January 2026 00:55:43 +0000 (0:00:00.798) 0:09:13.193 ****** 2026-01-08 00:57:43.582062 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.582066 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.582070 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.582074 | orchestrator | 2026-01-08 00:57:43.582077 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2026-01-08 00:57:43.582081 | orchestrator | Thursday 08 January 2026 00:55:44 +0000 (0:00:00.307) 0:09:13.501 ****** 2026-01-08 00:57:43.582085 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.582102 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.582107 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.582111 | orchestrator | 2026-01-08 00:57:43.582115 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-08 00:57:43.582119 | orchestrator | Thursday 08 January 2026 00:55:45 +0000 (0:00:00.758) 0:09:14.260 ****** 2026-01-08 00:57:43.582123 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.582126 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.582130 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.582134 | orchestrator | 2026-01-08 00:57:43.582138 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-08 00:57:43.582142 | orchestrator | Thursday 08 January 2026 00:55:46 +0000 (0:00:01.059) 0:09:15.319 ****** 2026-01-08 00:57:43.582145 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.582149 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.582153 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.582157 | orchestrator | 2026-01-08 00:57:43.582160 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-08 00:57:43.582164 | orchestrator | Thursday 08 January 2026 00:55:46 +0000 (0:00:00.708) 0:09:16.028 ****** 2026-01-08 00:57:43.582168 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.582172 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.582176 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.582180 | orchestrator | 2026-01-08 00:57:43.582183 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-08 
00:57:43.582187 | orchestrator | Thursday 08 January 2026 00:55:47 +0000 (0:00:00.348) 0:09:16.376 ****** 2026-01-08 00:57:43.582191 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.582202 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.582205 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.582209 | orchestrator | 2026-01-08 00:57:43.582213 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-08 00:57:43.582217 | orchestrator | Thursday 08 January 2026 00:55:47 +0000 (0:00:00.301) 0:09:16.677 ****** 2026-01-08 00:57:43.582221 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.582224 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.582228 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.582232 | orchestrator | 2026-01-08 00:57:43.582236 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-08 00:57:43.582240 | orchestrator | Thursday 08 January 2026 00:55:48 +0000 (0:00:00.567) 0:09:17.244 ****** 2026-01-08 00:57:43.582247 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.582253 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.582259 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.582265 | orchestrator | 2026-01-08 00:57:43.582271 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-08 00:57:43.582278 | orchestrator | Thursday 08 January 2026 00:55:48 +0000 (0:00:00.773) 0:09:18.018 ****** 2026-01-08 00:57:43.582288 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.582294 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.582300 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.582306 | orchestrator | 2026-01-08 00:57:43.582312 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-08 00:57:43.582317 | orchestrator | 
Thursday 08 January 2026 00:55:49 +0000 (0:00:00.667) 0:09:18.685 ****** 2026-01-08 00:57:43.582323 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.582329 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.582335 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.582341 | orchestrator | 2026-01-08 00:57:43.582347 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-08 00:57:43.582353 | orchestrator | Thursday 08 January 2026 00:55:49 +0000 (0:00:00.319) 0:09:19.005 ****** 2026-01-08 00:57:43.582359 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.582371 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.582377 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.582383 | orchestrator | 2026-01-08 00:57:43.582388 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-08 00:57:43.582394 | orchestrator | Thursday 08 January 2026 00:55:50 +0000 (0:00:00.580) 0:09:19.586 ****** 2026-01-08 00:57:43.582400 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.582405 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.582411 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.582416 | orchestrator | 2026-01-08 00:57:43.582423 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-08 00:57:43.582430 | orchestrator | Thursday 08 January 2026 00:55:50 +0000 (0:00:00.339) 0:09:19.925 ****** 2026-01-08 00:57:43.582435 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.582441 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.582447 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.582452 | orchestrator | 2026-01-08 00:57:43.582458 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-08 00:57:43.582465 | orchestrator | Thursday 08 January 2026 00:55:51 +0000 
(0:00:00.343) 0:09:20.269 ****** 2026-01-08 00:57:43.582471 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.582477 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.582484 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.582491 | orchestrator | 2026-01-08 00:57:43.582498 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-08 00:57:43.582506 | orchestrator | Thursday 08 January 2026 00:55:51 +0000 (0:00:00.338) 0:09:20.608 ****** 2026-01-08 00:57:43.582512 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.582518 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.582530 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.582536 | orchestrator | 2026-01-08 00:57:43.582542 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-08 00:57:43.582548 | orchestrator | Thursday 08 January 2026 00:55:51 +0000 (0:00:00.587) 0:09:21.196 ****** 2026-01-08 00:57:43.582555 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.582561 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.582572 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.582579 | orchestrator | 2026-01-08 00:57:43.582586 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-08 00:57:43.582593 | orchestrator | Thursday 08 January 2026 00:55:52 +0000 (0:00:00.349) 0:09:21.545 ****** 2026-01-08 00:57:43.582600 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.582607 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.582615 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.582622 | orchestrator | 2026-01-08 00:57:43.582629 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-08 00:57:43.582636 | orchestrator | Thursday 08 January 2026 00:55:52 +0000 (0:00:00.321) 
0:09:21.867 ****** 2026-01-08 00:57:43.582642 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.582649 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.582657 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.582664 | orchestrator | 2026-01-08 00:57:43.582671 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-08 00:57:43.582678 | orchestrator | Thursday 08 January 2026 00:55:52 +0000 (0:00:00.357) 0:09:22.224 ****** 2026-01-08 00:57:43.582685 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.582692 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.582698 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.582705 | orchestrator | 2026-01-08 00:57:43.582712 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-01-08 00:57:43.582721 | orchestrator | Thursday 08 January 2026 00:55:53 +0000 (0:00:00.859) 0:09:23.084 ****** 2026-01-08 00:57:43.582726 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.582732 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.582741 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-01-08 00:57:43.582748 | orchestrator | 2026-01-08 00:57:43.582754 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-01-08 00:57:43.582761 | orchestrator | Thursday 08 January 2026 00:55:54 +0000 (0:00:00.400) 0:09:23.485 ****** 2026-01-08 00:57:43.582766 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-08 00:57:43.582774 | orchestrator | 2026-01-08 00:57:43.582780 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-01-08 00:57:43.582787 | orchestrator | Thursday 08 January 2026 00:55:56 +0000 (0:00:02.232) 0:09:25.717 ****** 2026-01-08 00:57:43.582796 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-01-08 00:57:43.582803 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.582809 | orchestrator | 2026-01-08 00:57:43.582815 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-01-08 00:57:43.582821 | orchestrator | Thursday 08 January 2026 00:55:56 +0000 (0:00:00.223) 0:09:25.940 ****** 2026-01-08 00:57:43.582829 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-08 00:57:43.582836 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-08 00:57:43.582849 | orchestrator | 2026-01-08 00:57:43.582856 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-01-08 00:57:43.582863 | orchestrator | Thursday 08 January 2026 00:56:04 +0000 (0:00:07.711) 0:09:33.652 ****** 2026-01-08 00:57:43.582876 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-08 00:57:43.582883 | orchestrator | 2026-01-08 00:57:43.582889 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-01-08 00:57:43.582896 | orchestrator | Thursday 08 January 2026 00:56:08 +0000 (0:00:03.811) 0:09:37.464 ****** 2026-01-08 00:57:43.582903 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-01-08 00:57:43.582910 | orchestrator | 2026-01-08 00:57:43.582918 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-01-08 00:57:43.582925 | orchestrator | Thursday 08 January 2026 00:56:08 +0000 (0:00:00.581) 0:09:38.045 ****** 2026-01-08 00:57:43.582932 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-08 00:57:43.582939 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-08 00:57:43.582945 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-08 00:57:43.582952 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-01-08 00:57:43.582959 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-01-08 00:57:43.582966 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-01-08 00:57:43.582973 | orchestrator | 2026-01-08 00:57:43.582979 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-01-08 00:57:43.582985 | orchestrator | Thursday 08 January 2026 00:56:09 +0000 (0:00:01.179) 0:09:39.224 ****** 2026-01-08 00:57:43.582992 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:57:43.582998 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-08 00:57:43.583005 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-08 00:57:43.583011 | orchestrator | 2026-01-08 00:57:43.583022 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-01-08 00:57:43.583030 | orchestrator | Thursday 08 January 2026 00:56:12 +0000 (0:00:02.645) 0:09:41.870 ****** 2026-01-08 00:57:43.583037 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-08 00:57:43.583044 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-01-08 00:57:43.583051 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:57:43.583058 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-08 00:57:43.583065 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-08 00:57:43.583070 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-08 00:57:43.583077 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-08 00:57:43.583083 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:57:43.583103 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:57:43.583110 | orchestrator | 2026-01-08 00:57:43.583116 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-01-08 00:57:43.583122 | orchestrator | Thursday 08 January 2026 00:56:14 +0000 (0:00:01.596) 0:09:43.466 ****** 2026-01-08 00:57:43.583127 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:57:43.583134 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:57:43.583140 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:57:43.583146 | orchestrator | 2026-01-08 00:57:43.583152 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-01-08 00:57:43.583159 | orchestrator | Thursday 08 January 2026 00:56:17 +0000 (0:00:02.809) 0:09:46.276 ****** 2026-01-08 00:57:43.583165 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.583170 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.583176 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.583182 | orchestrator | 2026-01-08 00:57:43.583189 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-01-08 00:57:43.583201 | orchestrator | Thursday 08 January 2026 00:56:17 +0000 (0:00:00.324) 0:09:46.601 ****** 2026-01-08 00:57:43.583208 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-01-08 00:57:43.583214 | orchestrator | 2026-01-08 00:57:43.583220 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-01-08 00:57:43.583225 | orchestrator | Thursday 08 January 2026 00:56:18 +0000 (0:00:00.802) 0:09:47.404 ****** 2026-01-08 00:57:43.583231 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 00:57:43.583237 | orchestrator | 2026-01-08 00:57:43.583243 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-01-08 00:57:43.583249 | orchestrator | Thursday 08 January 2026 00:56:18 +0000 (0:00:00.545) 0:09:47.949 ****** 2026-01-08 00:57:43.583254 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:57:43.583259 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:57:43.583265 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:57:43.583272 | orchestrator | 2026-01-08 00:57:43.583277 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-01-08 00:57:43.583284 | orchestrator | Thursday 08 January 2026 00:56:19 +0000 (0:00:01.227) 0:09:49.177 ****** 2026-01-08 00:57:43.583290 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:57:43.583296 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:57:43.583302 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:57:43.583308 | orchestrator | 2026-01-08 00:57:43.583314 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-01-08 00:57:43.583319 | orchestrator | Thursday 08 January 2026 00:56:21 +0000 (0:00:01.744) 0:09:50.921 ****** 2026-01-08 00:57:43.583325 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:57:43.583331 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:57:43.583337 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:57:43.583343 | orchestrator | 2026-01-08 
00:57:43.583349 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-01-08 00:57:43.583355 | orchestrator | Thursday 08 January 2026 00:56:23 +0000 (0:00:01.935) 0:09:52.856 ****** 2026-01-08 00:57:43.583363 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:57:43.583375 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:57:43.583380 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:57:43.583386 | orchestrator | 2026-01-08 00:57:43.583392 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-01-08 00:57:43.583397 | orchestrator | Thursday 08 January 2026 00:56:25 +0000 (0:00:02.383) 0:09:55.240 ****** 2026-01-08 00:57:43.583404 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.583409 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.583415 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.583421 | orchestrator | 2026-01-08 00:57:43.583426 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-08 00:57:43.583432 | orchestrator | Thursday 08 January 2026 00:56:27 +0000 (0:00:01.510) 0:09:56.750 ****** 2026-01-08 00:57:43.583438 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:57:43.583444 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:57:43.583450 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:57:43.583456 | orchestrator | 2026-01-08 00:57:43.583462 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-08 00:57:43.583467 | orchestrator | Thursday 08 January 2026 00:56:28 +0000 (0:00:00.683) 0:09:57.434 ****** 2026-01-08 00:57:43.583474 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 00:57:43.583479 | orchestrator | 2026-01-08 00:57:43.583485 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-01-08 00:57:43.583490 | orchestrator | Thursday 08 January 2026 00:56:28 +0000 (0:00:00.698) 0:09:58.132 ****** 2026-01-08 00:57:43.583496 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.583508 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.583513 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.583519 | orchestrator | 2026-01-08 00:57:43.583526 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-08 00:57:43.583531 | orchestrator | Thursday 08 January 2026 00:56:29 +0000 (0:00:00.401) 0:09:58.534 ****** 2026-01-08 00:57:43.583537 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:57:43.583544 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:57:43.583554 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:57:43.583559 | orchestrator | 2026-01-08 00:57:43.583565 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-08 00:57:43.583571 | orchestrator | Thursday 08 January 2026 00:56:30 +0000 (0:00:01.342) 0:09:59.876 ****** 2026-01-08 00:57:43.583577 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-08 00:57:43.583583 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-08 00:57:43.583589 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-08 00:57:43.583595 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.583601 | orchestrator | 2026-01-08 00:57:43.583606 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-08 00:57:43.583613 | orchestrator | Thursday 08 January 2026 00:56:31 +0000 (0:00:01.006) 0:10:00.883 ****** 2026-01-08 00:57:43.583618 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.583624 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.583630 | orchestrator | ok: [testbed-node-5] 2026-01-08 
00:57:43.583636 | orchestrator | 2026-01-08 00:57:43.583642 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-01-08 00:57:43.583649 | orchestrator | 2026-01-08 00:57:43.583655 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-08 00:57:43.583660 | orchestrator | Thursday 08 January 2026 00:56:32 +0000 (0:00:00.903) 0:10:01.787 ****** 2026-01-08 00:57:43.583666 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 00:57:43.583673 | orchestrator | 2026-01-08 00:57:43.583679 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-08 00:57:43.583685 | orchestrator | Thursday 08 January 2026 00:56:33 +0000 (0:00:00.679) 0:10:02.467 ****** 2026-01-08 00:57:43.583692 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 00:57:43.583697 | orchestrator | 2026-01-08 00:57:43.583702 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-08 00:57:43.583707 | orchestrator | Thursday 08 January 2026 00:56:34 +0000 (0:00:01.299) 0:10:03.766 ****** 2026-01-08 00:57:43.583713 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.583719 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.583725 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.583730 | orchestrator | 2026-01-08 00:57:43.583736 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-08 00:57:43.583741 | orchestrator | Thursday 08 January 2026 00:56:35 +0000 (0:00:00.487) 0:10:04.254 ****** 2026-01-08 00:57:43.583746 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.583752 | orchestrator | ok: [testbed-node-4] 2026-01-08 
00:57:43.583758 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.583764 | orchestrator | 2026-01-08 00:57:43.583769 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-08 00:57:43.583775 | orchestrator | Thursday 08 January 2026 00:56:35 +0000 (0:00:00.885) 0:10:05.140 ****** 2026-01-08 00:57:43.583781 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.583788 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.583794 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.583800 | orchestrator | 2026-01-08 00:57:43.583806 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-08 00:57:43.583811 | orchestrator | Thursday 08 January 2026 00:56:36 +0000 (0:00:00.998) 0:10:06.139 ****** 2026-01-08 00:57:43.583827 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.583833 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.583838 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.583843 | orchestrator | 2026-01-08 00:57:43.583849 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-08 00:57:43.583854 | orchestrator | Thursday 08 January 2026 00:56:37 +0000 (0:00:00.786) 0:10:06.926 ****** 2026-01-08 00:57:43.583860 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.583866 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.583872 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.583878 | orchestrator | 2026-01-08 00:57:43.583892 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-08 00:57:43.583898 | orchestrator | Thursday 08 January 2026 00:56:38 +0000 (0:00:00.404) 0:10:07.330 ****** 2026-01-08 00:57:43.583905 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.583912 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.583919 | orchestrator | skipping: 
[testbed-node-5] 2026-01-08 00:57:43.583925 | orchestrator | 2026-01-08 00:57:43.583931 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-08 00:57:43.583937 | orchestrator | Thursday 08 January 2026 00:56:38 +0000 (0:00:00.384) 0:10:07.714 ****** 2026-01-08 00:57:43.583942 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.583948 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.583954 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.583960 | orchestrator | 2026-01-08 00:57:43.583966 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-08 00:57:43.583972 | orchestrator | Thursday 08 January 2026 00:56:39 +0000 (0:00:00.688) 0:10:08.402 ****** 2026-01-08 00:57:43.583979 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.583984 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.583990 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.583996 | orchestrator | 2026-01-08 00:57:43.584002 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-08 00:57:43.584008 | orchestrator | Thursday 08 January 2026 00:56:39 +0000 (0:00:00.736) 0:10:09.139 ****** 2026-01-08 00:57:43.584015 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.584021 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.584027 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.584033 | orchestrator | 2026-01-08 00:57:43.584039 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-08 00:57:43.584044 | orchestrator | Thursday 08 January 2026 00:56:40 +0000 (0:00:00.747) 0:10:09.886 ****** 2026-01-08 00:57:43.584051 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.584057 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.584064 | orchestrator | skipping: [testbed-node-5] 2026-01-08 
00:57:43.584071 | orchestrator | 2026-01-08 00:57:43.584076 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-08 00:57:43.584087 | orchestrator | Thursday 08 January 2026 00:56:40 +0000 (0:00:00.306) 0:10:10.193 ****** 2026-01-08 00:57:43.584112 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.584118 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.584123 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.584129 | orchestrator | 2026-01-08 00:57:43.584135 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-08 00:57:43.584141 | orchestrator | Thursday 08 January 2026 00:56:41 +0000 (0:00:00.645) 0:10:10.838 ****** 2026-01-08 00:57:43.584147 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.584154 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.584160 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.584166 | orchestrator | 2026-01-08 00:57:43.584173 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-08 00:57:43.584180 | orchestrator | Thursday 08 January 2026 00:56:42 +0000 (0:00:00.426) 0:10:11.264 ****** 2026-01-08 00:57:43.584186 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.584197 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.584204 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.584210 | orchestrator | 2026-01-08 00:57:43.584216 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-08 00:57:43.584222 | orchestrator | Thursday 08 January 2026 00:56:42 +0000 (0:00:00.440) 0:10:11.704 ****** 2026-01-08 00:57:43.584228 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.584234 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.584240 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.584246 | orchestrator | 2026-01-08 
00:57:43.584253 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-08 00:57:43.584259 | orchestrator | Thursday 08 January 2026 00:56:42 +0000 (0:00:00.364) 0:10:12.069 ****** 2026-01-08 00:57:43.584265 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.584271 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.584277 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.584283 | orchestrator | 2026-01-08 00:57:43.584290 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-08 00:57:43.584296 | orchestrator | Thursday 08 January 2026 00:56:43 +0000 (0:00:00.613) 0:10:12.683 ****** 2026-01-08 00:57:43.584303 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.584308 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.584315 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.584320 | orchestrator | 2026-01-08 00:57:43.584326 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-08 00:57:43.584331 | orchestrator | Thursday 08 January 2026 00:56:43 +0000 (0:00:00.350) 0:10:13.034 ****** 2026-01-08 00:57:43.584337 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.584343 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.584348 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.584353 | orchestrator | 2026-01-08 00:57:43.584359 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-08 00:57:43.584365 | orchestrator | Thursday 08 January 2026 00:56:44 +0000 (0:00:00.376) 0:10:13.410 ****** 2026-01-08 00:57:43.584370 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.584376 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.584382 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.584388 | orchestrator | 2026-01-08 00:57:43.584393 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-08 00:57:43.584399 | orchestrator | Thursday 08 January 2026 00:56:44 +0000 (0:00:00.402) 0:10:13.813 ****** 2026-01-08 00:57:43.584404 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.584410 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.584416 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.584423 | orchestrator | 2026-01-08 00:57:43.584429 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-01-08 00:57:43.584435 | orchestrator | Thursday 08 January 2026 00:56:45 +0000 (0:00:00.841) 0:10:14.654 ****** 2026-01-08 00:57:43.584442 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 00:57:43.584448 | orchestrator | 2026-01-08 00:57:43.584455 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-08 00:57:43.584468 | orchestrator | Thursday 08 January 2026 00:56:45 +0000 (0:00:00.531) 0:10:15.186 ****** 2026-01-08 00:57:43.584475 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:57:43.584481 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-08 00:57:43.584487 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-08 00:57:43.584493 | orchestrator | 2026-01-08 00:57:43.584499 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-08 00:57:43.584505 | orchestrator | Thursday 08 January 2026 00:56:48 +0000 (0:00:02.295) 0:10:17.482 ****** 2026-01-08 00:57:43.584512 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-08 00:57:43.584518 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-08 00:57:43.584533 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:57:43.584540 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-01-08 00:57:43.584546 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-08 00:57:43.584553 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:57:43.584559 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-08 00:57:43.584566 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-08 00:57:43.584572 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:57:43.584578 | orchestrator | 2026-01-08 00:57:43.584585 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-01-08 00:57:43.584591 | orchestrator | Thursday 08 January 2026 00:56:49 +0000 (0:00:01.540) 0:10:19.023 ****** 2026-01-08 00:57:43.584597 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.584604 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.584610 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.584616 | orchestrator | 2026-01-08 00:57:43.584622 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-01-08 00:57:43.584628 | orchestrator | Thursday 08 January 2026 00:56:50 +0000 (0:00:00.325) 0:10:19.348 ****** 2026-01-08 00:57:43.584638 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 00:57:43.584642 | orchestrator | 2026-01-08 00:57:43.584680 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-01-08 00:57:43.584686 | orchestrator | Thursday 08 January 2026 00:56:50 +0000 (0:00:00.558) 0:10:19.907 ****** 2026-01-08 00:57:43.584690 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-08 00:57:43.584695 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-08 00:57:43.584699 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-08 00:57:43.584703 | orchestrator | 2026-01-08 00:57:43.584707 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-01-08 00:57:43.584711 | orchestrator | Thursday 08 January 2026 00:56:52 +0000 (0:00:01.393) 0:10:21.300 ****** 2026-01-08 00:57:43.584714 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:57:43.584718 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-08 00:57:43.584722 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:57:43.584726 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-08 00:57:43.584730 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:57:43.584734 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-08 00:57:43.584738 | orchestrator | 2026-01-08 00:57:43.584741 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-08 00:57:43.584745 | orchestrator | Thursday 08 January 2026 00:56:56 +0000 (0:00:04.774) 0:10:26.075 ****** 2026-01-08 00:57:43.584749 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:57:43.584753 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-08 00:57:43.584757 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:57:43.584760 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-08 00:57:43.584764 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:57:43.584772 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-08 00:57:43.584776 | orchestrator | 2026-01-08 00:57:43.584780 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-08 00:57:43.584786 | orchestrator | Thursday 08 January 2026 00:56:59 +0000 (0:00:02.580) 0:10:28.656 ****** 2026-01-08 00:57:43.584793 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-08 00:57:43.584797 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:57:43.584801 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-08 00:57:43.584805 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:57:43.584808 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-08 00:57:43.584812 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:57:43.584817 | orchestrator | 2026-01-08 00:57:43.584823 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-01-08 00:57:43.584832 | orchestrator | Thursday 08 January 2026 00:57:00 +0000 (0:00:01.381) 0:10:30.037 ****** 2026-01-08 00:57:43.584836 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-01-08 00:57:43.584840 | orchestrator | 2026-01-08 00:57:43.584844 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-01-08 00:57:43.584848 | orchestrator | Thursday 08 January 2026 00:57:01 +0000 (0:00:00.228) 0:10:30.265 ****** 2026-01-08 00:57:43.584852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-01-08 00:57:43.584856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-08 00:57:43.584859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-08 00:57:43.584863 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-08 00:57:43.584867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-08 00:57:43.584871 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.584875 | orchestrator | 2026-01-08 00:57:43.584879 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-01-08 00:57:43.584883 | orchestrator | Thursday 08 January 2026 00:57:02 +0000 (0:00:01.183) 0:10:31.449 ****** 2026-01-08 00:57:43.584886 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-08 00:57:43.584893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-08 00:57:43.584897 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-08 00:57:43.584900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-08 00:57:43.584904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-08 00:57:43.584909 | orchestrator | skipping: [testbed-node-3] 2026-01-08 
00:57:43.584915 | orchestrator | 2026-01-08 00:57:43.584921 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-01-08 00:57:43.584927 | orchestrator | Thursday 08 January 2026 00:57:02 +0000 (0:00:00.607) 0:10:32.057 ****** 2026-01-08 00:57:43.584934 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-08 00:57:43.584938 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-08 00:57:43.584945 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-08 00:57:43.584949 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-08 00:57:43.584952 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-08 00:57:43.584956 | orchestrator | 2026-01-08 00:57:43.584960 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-01-08 00:57:43.584964 | orchestrator | Thursday 08 January 2026 00:57:31 +0000 (0:00:29.077) 0:11:01.134 ****** 2026-01-08 00:57:43.584968 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.584971 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.584975 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.584979 | orchestrator | 2026-01-08 00:57:43.584983 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-01-08 00:57:43.584986 | orchestrator | 
Thursday 08 January 2026 00:57:32 +0000 (0:00:00.327) 0:11:01.461 ****** 2026-01-08 00:57:43.584990 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.584994 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.584998 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.585002 | orchestrator | 2026-01-08 00:57:43.585005 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-01-08 00:57:43.585009 | orchestrator | Thursday 08 January 2026 00:57:32 +0000 (0:00:00.311) 0:11:01.773 ****** 2026-01-08 00:57:43.585013 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 00:57:43.585017 | orchestrator | 2026-01-08 00:57:43.585021 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-01-08 00:57:43.585024 | orchestrator | Thursday 08 January 2026 00:57:33 +0000 (0:00:00.797) 0:11:02.571 ****** 2026-01-08 00:57:43.585028 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 00:57:43.585032 | orchestrator | 2026-01-08 00:57:43.585038 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-01-08 00:57:43.585042 | orchestrator | Thursday 08 January 2026 00:57:33 +0000 (0:00:00.543) 0:11:03.114 ****** 2026-01-08 00:57:43.585046 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:57:43.585049 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:57:43.585055 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:57:43.585061 | orchestrator | 2026-01-08 00:57:43.585067 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-01-08 00:57:43.585076 | orchestrator | Thursday 08 January 2026 00:57:35 +0000 (0:00:01.228) 0:11:04.343 ****** 2026-01-08 00:57:43.585084 | orchestrator | changed: 
[testbed-node-3] 2026-01-08 00:57:43.585103 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:57:43.585110 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:57:43.585116 | orchestrator | 2026-01-08 00:57:43.585122 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-01-08 00:57:43.585128 | orchestrator | Thursday 08 January 2026 00:57:36 +0000 (0:00:01.478) 0:11:05.821 ****** 2026-01-08 00:57:43.585134 | orchestrator | changed: [testbed-node-4] 2026-01-08 00:57:43.585140 | orchestrator | changed: [testbed-node-3] 2026-01-08 00:57:43.585146 | orchestrator | changed: [testbed-node-5] 2026-01-08 00:57:43.585152 | orchestrator | 2026-01-08 00:57:43.585157 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-01-08 00:57:43.585163 | orchestrator | Thursday 08 January 2026 00:57:38 +0000 (0:00:01.695) 0:11:07.517 ****** 2026-01-08 00:57:43.585169 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-08 00:57:43.585180 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-08 00:57:43.585186 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-08 00:57:43.585192 | orchestrator | 2026-01-08 00:57:43.585201 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-08 00:57:43.585207 | orchestrator | Thursday 08 January 2026 00:57:40 +0000 (0:00:02.091) 0:11:09.608 ****** 2026-01-08 00:57:43.585214 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.585220 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.585227 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.585233 | orchestrator 
| 2026-01-08 00:57:43.585239 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-08 00:57:43.585245 | orchestrator | Thursday 08 January 2026 00:57:40 +0000 (0:00:00.370) 0:11:09.978 ****** 2026-01-08 00:57:43.585251 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 00:57:43.585257 | orchestrator | 2026-01-08 00:57:43.585263 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-08 00:57:43.585271 | orchestrator | Thursday 08 January 2026 00:57:41 +0000 (0:00:00.490) 0:11:10.469 ****** 2026-01-08 00:57:43.585278 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.585285 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.585292 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.585298 | orchestrator | 2026-01-08 00:57:43.585304 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-08 00:57:43.585311 | orchestrator | Thursday 08 January 2026 00:57:41 +0000 (0:00:00.472) 0:11:10.941 ****** 2026-01-08 00:57:43.585317 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:57:43.585324 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:57:43.585330 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:57:43.585336 | orchestrator | 2026-01-08 00:57:43.585342 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-01-08 00:57:43.585349 | orchestrator | Thursday 08 January 2026 00:57:42 +0000 (0:00:00.313) 0:11:11.254 ****** 2026-01-08 00:57:43.585355 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-08 00:57:43.585362 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-08 00:57:43.585368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-08 00:57:43.585373 | orchestrator 
| skipping: [testbed-node-3] 2026-01-08 00:57:43.585377 | orchestrator | 2026-01-08 00:57:43.585381 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-01-08 00:57:43.585385 | orchestrator | Thursday 08 January 2026 00:57:42 +0000 (0:00:00.615) 0:11:11.869 ****** 2026-01-08 00:57:43.585389 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:57:43.585392 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:57:43.585396 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:57:43.585400 | orchestrator | 2026-01-08 00:57:43.585404 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 00:57:43.585408 | orchestrator | testbed-node-0 : ok=134  changed=34  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-01-08 00:57:43.585412 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-01-08 00:57:43.585416 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-01-08 00:57:43.585420 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-01-08 00:57:43.585424 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-01-08 00:57:43.585436 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-01-08 00:57:43.585440 | orchestrator | 2026-01-08 00:57:43.585444 | orchestrator | 2026-01-08 00:57:43.585448 | orchestrator | 2026-01-08 00:57:43.585451 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 00:57:43.585455 | orchestrator | Thursday 08 January 2026 00:57:42 +0000 (0:00:00.246) 0:11:12.116 ****** 2026-01-08 00:57:43.585459 | orchestrator | =============================================================================== 
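The PLAY RECAP above reports per-host `ok`/`changed`/`unreachable`/`failed` counters; a CI gate normally treats the run as healthy only when every host shows `failed=0` and `unreachable=0`. A minimal sketch of parsing such recap lines (the helper names are hypothetical, not part of this job):

```python
import re

# Matches recap lines such as:
# "testbed-node-0 : ok=134  changed=34  unreachable=0 failed=0 ..."
RECAP_RE = re.compile(
    r"(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def parse_recap(lines):
    """Parse Ansible PLAY RECAP lines into a {host: counters} mapping."""
    stats = {}
    for line in lines:
        m = RECAP_RE.search(line)
        if m:
            stats[m.group("host")] = {
                k: int(v) for k, v in m.groupdict().items() if k != "host"
            }
    return stats

def deploy_succeeded(stats):
    """A run is healthy when no host reports failures or unreachability."""
    return all(s["failed"] == 0 and s["unreachable"] == 0 for s in stats.values())
```

Applied to the recap above, all six testbed nodes report `failed=0 unreachable=0`, so the ceph play is considered successful.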
2026-01-08 00:57:43.585463 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 52.94s 2026-01-08 00:57:43.585467 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 43.19s 2026-01-08 00:57:43.585470 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 29.94s 2026-01-08 00:57:43.585474 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 29.08s 2026-01-08 00:57:43.585478 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.68s 2026-01-08 00:57:43.585481 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.34s 2026-01-08 00:57:43.585485 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.51s 2026-01-08 00:57:43.585489 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.51s 2026-01-08 00:57:43.585493 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.00s 2026-01-08 00:57:43.585496 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.71s 2026-01-08 00:57:43.585500 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.85s 2026-01-08 00:57:43.585504 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.84s 2026-01-08 00:57:43.585508 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.32s 2026-01-08 00:57:43.585515 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 5.02s 2026-01-08 00:57:43.585518 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.77s 2026-01-08 00:57:43.585523 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.69s 2026-01-08 
00:57:43.585529 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 3.88s 2026-01-08 00:57:43.585536 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.81s 2026-01-08 00:57:43.585542 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.65s 2026-01-08 00:57:43.585548 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.56s 2026-01-08 00:57:43.585554 | orchestrator | 2026-01-08 00:57:43 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:57:46.611583 | orchestrator | 2026-01-08 00:57:46 | INFO  | Task c38eeea8-c410-4b31-a6cc-a4d3f8ad6a81 is in state STARTED 2026-01-08 00:57:46.613315 | orchestrator | 2026-01-08 00:57:46 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED 2026-01-08 00:57:46.615003 | orchestrator | 2026-01-08 00:57:46 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:57:46.615044 | orchestrator | 2026-01-08 00:57:46 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:57:49.660492 | orchestrator | 2026-01-08 00:57:49 | INFO  | Task c38eeea8-c410-4b31-a6cc-a4d3f8ad6a81 is in state STARTED 2026-01-08 00:57:49.662169 | orchestrator | 2026-01-08 00:57:49 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED 2026-01-08 00:57:49.662221 | orchestrator | 2026-01-08 00:57:49 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:57:49.662228 | orchestrator | 2026-01-08 00:57:49 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:57:52.709022 | orchestrator | 2026-01-08 00:57:52 | INFO  | Task c38eeea8-c410-4b31-a6cc-a4d3f8ad6a81 is in state STARTED 2026-01-08 00:57:52.712974 | orchestrator | 2026-01-08 00:57:52 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED 2026-01-08 00:57:52.714297 | orchestrator | 2026-01-08 00:57:52 | INFO  | Task 
0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:57:52.714885 | orchestrator | 2026-01-08 00:57:52 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:58:29.346737 | orchestrator | 2026-01-08 00:58:29 | INFO  | Task 
c38eeea8-c410-4b31-a6cc-a4d3f8ad6a81 is in state STARTED 2026-01-08 00:58:29.348215 | orchestrator | 2026-01-08 00:58:29 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state STARTED 2026-01-08 00:58:29.350059 | orchestrator | 2026-01-08 00:58:29 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:58:29.350607 | orchestrator | 2026-01-08 00:58:29 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:58:32.402279 | orchestrator | 2026-01-08 00:58:32 | INFO  | Task c38eeea8-c410-4b31-a6cc-a4d3f8ad6a81 is in state STARTED 2026-01-08 00:58:32.404726 | orchestrator | 2026-01-08 00:58:32 | INFO  | Task 3dece209-b716-4b47-95b8-1722413518b4 is in state SUCCESS 2026-01-08 00:58:32.406523 | orchestrator | 2026-01-08 00:58:32.406557 | orchestrator | 2026-01-08 00:58:32.406564 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-08 00:58:32.406570 | orchestrator | 2026-01-08 00:58:32.406574 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-08 00:58:32.406579 | orchestrator | Thursday 08 January 2026 00:55:58 +0000 (0:00:00.270) 0:00:00.270 ****** 2026-01-08 00:58:32.406597 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:58:32.406602 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:58:32.406607 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:58:32.406611 | orchestrator | 2026-01-08 00:58:32.406616 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-08 00:58:32.406620 | orchestrator | Thursday 08 January 2026 00:55:58 +0000 (0:00:00.319) 0:00:00.590 ****** 2026-01-08 00:58:32.406631 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-01-08 00:58:32.406636 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-01-08 00:58:32.406640 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-01-08 
00:58:32.406644 | orchestrator | 2026-01-08 00:58:32.406649 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-01-08 00:58:32.406653 | orchestrator | 2026-01-08 00:58:32.406658 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-08 00:58:32.406662 | orchestrator | Thursday 08 January 2026 00:55:58 +0000 (0:00:00.460) 0:00:01.051 ****** 2026-01-08 00:58:32.406667 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:58:32.406671 | orchestrator | 2026-01-08 00:58:32.406676 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-01-08 00:58:32.406680 | orchestrator | Thursday 08 January 2026 00:55:59 +0000 (0:00:00.506) 0:00:01.558 ****** 2026-01-08 00:58:32.406685 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-08 00:58:32.406689 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-08 00:58:32.406694 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-08 00:58:32.406698 | orchestrator | 2026-01-08 00:58:32.406702 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-01-08 00:58:32.406707 | orchestrator | Thursday 08 January 2026 00:56:00 +0000 (0:00:00.751) 0:00:02.310 ****** 2026-01-08 00:58:32.406712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:58:32.406719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:58:32.406732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:58:32.406745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-08 00:58:32.406751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-08 00:58:32.406757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-08 00:58:32.406764 | orchestrator | 2026-01-08 00:58:32.406769 | orchestrator | 
TASK [opensearch : include_tasks] ********************************************** 2026-01-08 00:58:32.406773 | orchestrator | Thursday 08 January 2026 00:56:01 +0000 (0:00:01.763) 0:00:04.073 ****** 2026-01-08 00:58:32.406818 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:58:32.406825 | orchestrator | 2026-01-08 00:58:32.406832 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-01-08 00:58:32.406843 | orchestrator | Thursday 08 January 2026 00:56:02 +0000 (0:00:00.534) 0:00:04.608 ****** 2026-01-08 00:58:32.406860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:58:32.406868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:58:32.406979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:58:32.407010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-08 00:58:32.407028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-08 00:58:32.407034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-08 00:58:32.407039 | orchestrator | 2026-01-08 00:58:32.407044 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-01-08 00:58:32.407048 | orchestrator | Thursday 08 January 2026 00:56:04 +0000 (0:00:02.526) 0:00:07.134 ****** 2026-01-08 00:58:32.407053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 
'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:58:32.407065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-08 00:58:32.407070 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:58:32.407077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:58:32.407086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:58:32.407093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-08 00:58:32.407104 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:58:32.407116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-08 00:58:32.407124 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:58:32.407129 | orchestrator | 2026-01-08 00:58:32.407133 | orchestrator | TASK [service-cert-copy : opensearch | 
Copying over backend internal TLS key] *** 2026-01-08 00:58:32.407138 | orchestrator | Thursday 08 January 2026 00:56:06 +0000 (0:00:01.529) 0:00:08.664 ****** 2026-01-08 00:58:32.407145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:58:32.407149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-08 00:58:32.407157 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:58:32.407162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:58:32.407170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:58:32.407177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-08 00:58:32.407181 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:58:32.407186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-08 00:58:32.407196 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:58:32.407201 | orchestrator | 2026-01-08 00:58:32.407205 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-01-08 00:58:32.407211 | orchestrator | Thursday 08 January 2026 00:56:07 +0000 (0:00:01.097) 0:00:09.761 ****** 2026-01-08 00:58:32.407293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:58:32.407308 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:58:32.407319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:58:32.407327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-08 00:58:32.407341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-08 00:58:32.407357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-08 00:58:32.407363 | orchestrator | 2026-01-08 00:58:32.407368 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-01-08 00:58:32.407372 | orchestrator | Thursday 08 January 2026 00:56:10 +0000 (0:00:02.459) 0:00:12.221 ****** 2026-01-08 00:58:32.407377 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:58:32.407381 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:58:32.407386 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:58:32.407390 | orchestrator | 2026-01-08 00:58:32.407394 | orchestrator | TASK 
[opensearch : Copying over opensearch-dashboards config file] ************* 2026-01-08 00:58:32.407399 | orchestrator | Thursday 08 January 2026 00:56:12 +0000 (0:00:02.809) 0:00:15.031 ****** 2026-01-08 00:58:32.407403 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:58:32.407407 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:58:32.407412 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:58:32.407416 | orchestrator | 2026-01-08 00:58:32.407420 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-01-08 00:58:32.407425 | orchestrator | Thursday 08 January 2026 00:56:14 +0000 (0:00:01.956) 0:00:16.987 ****** 2026-01-08 00:58:32.407429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:58:32.407437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:58:32.407442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 00:58:32.407452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-08 00:58:32.407458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-08 00:58:32.407466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-08 00:58:32.407471 | orchestrator | 2026-01-08 00:58:32.407475 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-01-08 00:58:32.407480 | orchestrator | Thursday 08 January 2026 00:56:17 +0000 (0:00:02.433) 0:00:19.421 ****** 2026-01-08 00:58:32.407484 | orchestrator | changed: [testbed-node-0] => { 2026-01-08 00:58:32.407488 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:58:32.407493 | orchestrator | } 2026-01-08 00:58:32.407498 | orchestrator | changed: [testbed-node-1] => { 2026-01-08 00:58:32.407502 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:58:32.407506 | orchestrator | } 2026-01-08 00:58:32.407511 | orchestrator | changed: [testbed-node-2] => { 2026-01-08 00:58:32.407515 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:58:32.407519 | orchestrator | } 2026-01-08 00:58:32.407524 | orchestrator | 2026-01-08 00:58:32.407528 | 
orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-08 00:58:32.407535 | orchestrator | Thursday 08 January 2026 00:56:17 +0000 (0:00:00.349) 0:00:19.770 ****** 2026-01-08 00:58:32.407542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:58:32.407547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-08 00:58:32.407555 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:58:32.407559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:58:32.407567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-08 00:58:32.407572 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:58:32.407578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2025.1', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 00:58:32.407586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2025.1', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-08 00:58:32.407591 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:58:32.407595 | orchestrator | 2026-01-08 00:58:32.407600 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-08 00:58:32.407648 | orchestrator | Thursday 08 January 2026 00:56:18 +0000 (0:00:01.311) 0:00:21.082 ****** 2026-01-08 00:58:32.407654 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:58:32.407659 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:58:32.407663 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:58:32.407667 | orchestrator | 2026-01-08 00:58:32.407672 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-08 00:58:32.407676 | orchestrator | Thursday 08 January 2026 00:56:19 +0000 (0:00:00.404) 0:00:21.487 ****** 2026-01-08 00:58:32.407681 | orchestrator | 2026-01-08 00:58:32.407685 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-08 00:58:32.407689 | orchestrator | Thursday 08 January 2026 00:56:19 +0000 (0:00:00.066) 0:00:21.553 ****** 2026-01-08 00:58:32.407694 | orchestrator | 2026-01-08 00:58:32.407698 | orchestrator | TASK [opensearch : Flush 
handlers] ********************************************* 2026-01-08 00:58:32.407703 | orchestrator | Thursday 08 January 2026 00:56:19 +0000 (0:00:00.069) 0:00:21.623 ****** 2026-01-08 00:58:32.407707 | orchestrator | 2026-01-08 00:58:32.407711 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-01-08 00:58:32.407716 | orchestrator | Thursday 08 January 2026 00:56:19 +0000 (0:00:00.084) 0:00:21.707 ****** 2026-01-08 00:58:32.407720 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:58:32.407724 | orchestrator | 2026-01-08 00:58:32.407729 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-01-08 00:58:32.407733 | orchestrator | Thursday 08 January 2026 00:56:19 +0000 (0:00:00.209) 0:00:21.917 ****** 2026-01-08 00:58:32.407738 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:58:32.407742 | orchestrator | 2026-01-08 00:58:32.407746 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-01-08 00:58:32.407751 | orchestrator | Thursday 08 January 2026 00:56:20 +0000 (0:00:00.311) 0:00:22.229 ****** 2026-01-08 00:58:32.407755 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:58:32.407759 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:58:32.407764 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:58:32.407768 | orchestrator | 2026-01-08 00:58:32.407772 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-01-08 00:58:32.407777 | orchestrator | Thursday 08 January 2026 00:57:15 +0000 (0:00:55.186) 0:01:17.415 ****** 2026-01-08 00:58:32.407781 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:58:32.407786 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:58:32.407794 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:58:32.407798 | orchestrator | 2026-01-08 00:58:32.407802 | orchestrator | TASK [opensearch : 
include_tasks] ********************************************** 2026-01-08 00:58:32.407807 | orchestrator | Thursday 08 January 2026 00:58:19 +0000 (0:01:04.125) 0:02:21.540 ****** 2026-01-08 00:58:32.407815 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:58:32.407819 | orchestrator | 2026-01-08 00:58:32.407824 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-01-08 00:58:32.407828 | orchestrator | Thursday 08 January 2026 00:58:19 +0000 (0:00:00.517) 0:02:22.058 ****** 2026-01-08 00:58:32.407832 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:58:32.407837 | orchestrator | 2026-01-08 00:58:32.407842 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-01-08 00:58:32.407846 | orchestrator | Thursday 08 January 2026 00:58:22 +0000 (0:00:02.393) 0:02:24.451 ****** 2026-01-08 00:58:32.407851 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:58:32.407855 | orchestrator | 2026-01-08 00:58:32.407861 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-01-08 00:58:32.407866 | orchestrator | Thursday 08 January 2026 00:58:24 +0000 (0:00:02.062) 0:02:26.514 ****** 2026-01-08 00:58:32.407870 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:58:32.407875 | orchestrator | 2026-01-08 00:58:32.407879 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-01-08 00:58:32.407883 | orchestrator | Thursday 08 January 2026 00:58:27 +0000 (0:00:02.867) 0:02:29.382 ****** 2026-01-08 00:58:32.407888 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:58:32.407892 | orchestrator | 2026-01-08 00:58:32.407896 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 00:58:32.407901 | orchestrator | testbed-node-0 : ok=19  changed=12  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-08 00:58:32.407906 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-08 00:58:32.407911 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-08 00:58:32.407915 | orchestrator | 2026-01-08 00:58:32.407919 | orchestrator | 2026-01-08 00:58:32.407924 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 00:58:32.407928 | orchestrator | Thursday 08 January 2026 00:58:29 +0000 (0:00:02.248) 0:02:31.630 ****** 2026-01-08 00:58:32.407933 | orchestrator | =============================================================================== 2026-01-08 00:58:32.407937 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 64.13s 2026-01-08 00:58:32.407941 | orchestrator | opensearch : Restart opensearch container ------------------------------ 55.19s 2026-01-08 00:58:32.407946 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.87s 2026-01-08 00:58:32.407950 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.81s 2026-01-08 00:58:32.407954 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.53s 2026-01-08 00:58:32.407959 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.46s 2026-01-08 00:58:32.407963 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.43s 2026-01-08 00:58:32.407967 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.39s 2026-01-08 00:58:32.407972 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.25s 2026-01-08 00:58:32.407976 | orchestrator | opensearch : Check if a log retention policy exists 
--------------------- 2.06s 2026-01-08 00:58:32.408036 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.96s 2026-01-08 00:58:32.408043 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.76s 2026-01-08 00:58:32.408051 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.53s 2026-01-08 00:58:32.408055 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.31s 2026-01-08 00:58:32.408060 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.10s 2026-01-08 00:58:32.408064 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.75s 2026-01-08 00:58:32.408069 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2026-01-08 00:58:32.408073 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2026-01-08 00:58:32.408077 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2026-01-08 00:58:32.408082 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2026-01-08 00:58:32.408086 | orchestrator | 2026-01-08 00:58:32 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:58:32.408091 | orchestrator | 2026-01-08 00:58:32 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:58:35.448521 | orchestrator | 2026-01-08 00:58:35 | INFO  | Task c38eeea8-c410-4b31-a6cc-a4d3f8ad6a81 is in state STARTED 2026-01-08 00:58:35.449480 | orchestrator | 2026-01-08 00:58:35 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:58:35.449530 | orchestrator | 2026-01-08 00:58:35 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:58:38.501240 | orchestrator | 2026-01-08 00:58:38 | INFO  | Task 
c38eeea8-c410-4b31-a6cc-a4d3f8ad6a81 is in state STARTED 2026-01-08 00:58:38.506271 | orchestrator | 2026-01-08 00:58:38 | INFO  | Task
0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:59:08.997396 | orchestrator | 2026-01-08 00:59:09 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:59:12.050608 | orchestrator | 2026-01-08 00:59:12 | INFO  | Task c38eeea8-c410-4b31-a6cc-a4d3f8ad6a81 is in state STARTED 2026-01-08 00:59:12.052549 | orchestrator | 2026-01-08 00:59:12 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:59:12.052590 | orchestrator | 2026-01-08 00:59:12 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:59:15.098649 | orchestrator | 2026-01-08 00:59:15 | INFO  | Task c38eeea8-c410-4b31-a6cc-a4d3f8ad6a81 is in state STARTED 2026-01-08 00:59:15.100806 | orchestrator | 2026-01-08 00:59:15 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state STARTED 2026-01-08 00:59:15.100908 | orchestrator | 2026-01-08 00:59:15 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:59:18.148469 | orchestrator | 2026-01-08 00:59:18 | INFO  | Task c38eeea8-c410-4b31-a6cc-a4d3f8ad6a81 is in state STARTED 2026-01-08 00:59:18.151172 | orchestrator | 2026-01-08 00:59:18 | INFO  | Task 0f494656-66a7-489e-9c78-3e15cbfb6e5b is in state SUCCESS 2026-01-08 00:59:18.153290 | orchestrator | 2026-01-08 00:59:18.153348 | orchestrator | 2026-01-08 00:59:18.153356 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-01-08 00:59:18.153361 | orchestrator | 2026-01-08 00:59:18.153365 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-08 00:59:18.153379 | orchestrator | Thursday 08 January 2026 00:55:58 +0000 (0:00:00.093) 0:00:00.093 ****** 2026-01-08 00:59:18.153390 | orchestrator | ok: [localhost] => { 2026-01-08 00:59:18.153399 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 
2026-01-08 00:59:18.153406 | orchestrator | }
2026-01-08 00:59:18.153413 | orchestrator |
2026-01-08 00:59:18.153419 | orchestrator | TASK [Check MariaDB service] ***************************************************
2026-01-08 00:59:18.153439 | orchestrator | Thursday 08 January 2026 00:55:58 +0000 (0:00:00.060) 0:00:00.154 ******
2026-01-08 00:59:18.153444 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2026-01-08 00:59:18.153449 | orchestrator | ...ignoring
2026-01-08 00:59:18.153453 | orchestrator |
2026-01-08 00:59:18.153457 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2026-01-08 00:59:18.153461 | orchestrator | Thursday 08 January 2026 00:56:00 +0000 (0:00:02.857) 0:00:03.012 ******
2026-01-08 00:59:18.153465 | orchestrator | skipping: [localhost]
2026-01-08 00:59:18.153469 | orchestrator |
2026-01-08 00:59:18.153473 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2026-01-08 00:59:18.153477 | orchestrator | Thursday 08 January 2026 00:56:01 +0000 (0:00:00.064) 0:00:03.076 ******
2026-01-08 00:59:18.153482 | orchestrator | ok: [localhost]
2026-01-08 00:59:18.153488 | orchestrator |
2026-01-08 00:59:18.153494 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-08 00:59:18.153506 | orchestrator |
2026-01-08 00:59:18.153512 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-08 00:59:18.153517 | orchestrator | Thursday 08 January 2026 00:56:01 +0000 (0:00:00.169) 0:00:03.246 ******
2026-01-08 00:59:18.153524 | orchestrator | ok: [testbed-node-0]
2026-01-08 00:59:18.153530 | orchestrator | ok: [testbed-node-1]
2026-01-08 00:59:18.153536 | orchestrator | ok: [testbed-node-2]
2026-01-08 00:59:18.153543 | orchestrator |
2026-01-08 00:59:18.153549 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-08 00:59:18.153555 | orchestrator | Thursday 08 January 2026 00:56:01 +0000 (0:00:00.329) 0:00:03.575 ******
2026-01-08 00:59:18.153561 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-01-08 00:59:18.153568 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-01-08 00:59:18.153575 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-01-08 00:59:18.153581 | orchestrator |
2026-01-08 00:59:18.153588 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-01-08 00:59:18.153595 | orchestrator |
2026-01-08 00:59:18.153601 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-01-08 00:59:18.153608 | orchestrator | Thursday 08 January 2026 00:56:02 +0000 (0:00:00.585) 0:00:04.161 ******
2026-01-08 00:59:18.153615 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-08 00:59:18.153622 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-01-08 00:59:18.153626 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-01-08 00:59:18.153630 | orchestrator |
2026-01-08 00:59:18.153634 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-01-08 00:59:18.153638 | orchestrator | Thursday 08 January 2026 00:56:02 +0000 (0:00:00.373) 0:00:04.534 ******
2026-01-08 00:59:18.153642 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 00:59:18.153646 | orchestrator |
2026-01-08 00:59:18.153650 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-01-08 00:59:18.153655 | orchestrator | Thursday 08 January 2026 00:56:03 +0000 (0:00:00.630) 0:00:05.164 ******
2026-01-08 00:59:18.153685 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-08 00:59:18.153708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-08 00:59:18.153717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-08 00:59:18.153729 | orchestrator | 2026-01-08 00:59:18.153740 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-01-08 00:59:18.153747 | orchestrator | Thursday 08 January 2026 00:56:06 +0000 (0:00:03.108) 0:00:08.273 ****** 2026-01-08 00:59:18.153754 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.153759 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:59:18.153763 | 
orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.153767 | orchestrator | 2026-01-08 00:59:18.153773 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-01-08 00:59:18.153777 | orchestrator | Thursday 08 January 2026 00:56:06 +0000 (0:00:00.632) 0:00:08.906 ****** 2026-01-08 00:59:18.153781 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.153785 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.153789 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:59:18.153793 | orchestrator | 2026-01-08 00:59:18.153797 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-01-08 00:59:18.153801 | orchestrator | Thursday 08 January 2026 00:56:08 +0000 (0:00:01.592) 0:00:10.499 ****** 2026-01-08 00:59:18.153810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-08 00:59:18.153817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-08 00:59:18.153827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-08 00:59:18.153832 | orchestrator | 2026-01-08 00:59:18.153836 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-01-08 00:59:18.153840 | orchestrator | Thursday 08 January 2026 00:56:12 +0000 (0:00:04.049) 0:00:14.549 ****** 2026-01-08 00:59:18.153844 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.153848 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.153852 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:59:18.153906 | orchestrator | 2026-01-08 00:59:18.153917 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-01-08 00:59:18.153927 | orchestrator | Thursday 08 January 2026 00:56:13 +0000 (0:00:01.205) 0:00:15.754 ****** 2026-01-08 00:59:18.153933 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:59:18.153940 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:59:18.154283 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:59:18.154300 | orchestrator | 2026-01-08 00:59:18.154306 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-08 00:59:18.154312 | orchestrator | Thursday 08 January 2026 00:56:17 +0000 (0:00:04.228) 0:00:19.982 ****** 2026-01-08 00:59:18.154317 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:59:18.154322 | orchestrator | 2026-01-08 00:59:18.154327 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-08 
00:59:18.154332 | orchestrator | Thursday 08 January 2026 00:56:18 +0000 (0:00:00.553) 0:00:20.535 ****** 2026-01-08 00:59:18.154356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-08 00:59:18.154364 | orchestrator | 
skipping: [testbed-node-2] 2026-01-08 00:59:18.154370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-08 00:59:18.154380 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.154391 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-08 00:59:18.154397 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.154402 | orchestrator | 2026-01-08 00:59:18.154408 | orchestrator | TASK [service-cert-copy : mariadb 
| Copying over backend internal TLS certificate] *** 2026-01-08 00:59:18.154413 | orchestrator | Thursday 08 January 2026 00:56:22 +0000 (0:00:03.760) 0:00:24.295 ****** 2026-01-08 00:59:18.154419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}}}})  2026-01-08 00:59:18.154427 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.154455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-08 00:59:18.154462 | orchestrator | skipping: 
[testbed-node-1] 2026-01-08 00:59:18.154470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-08 00:59:18.154479 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.154484 | orchestrator | 2026-01-08 
00:59:18.154490 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-08 00:59:18.154497 | orchestrator | Thursday 08 January 2026 00:56:24 +0000 (0:00:02.726) 0:00:27.021 ****** 2026-01-08 00:59:18.154506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-08 00:59:18.154515 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.154536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2026-01-08 00:59:18.154553 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.154561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-08 00:59:18.154573 | orchestrator | skipping: 
[testbed-node-0] 2026-01-08 00:59:18.154582 | orchestrator | 2026-01-08 00:59:18.154590 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-01-08 00:59:18.154597 | orchestrator | Thursday 08 January 2026 00:56:27 +0000 (0:00:02.756) 0:00:29.778 ****** 2026-01-08 00:59:18.154616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-08 00:59:18.154632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2026-01-08 00:59:18.154649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-08 00:59:18.154656 | orchestrator | 2026-01-08 00:59:18.154662 | orchestrator | TASK [service-check-containers : mariadb | Notify 
handlers to restart containers] *** 2026-01-08 00:59:18.154667 | orchestrator | Thursday 08 January 2026 00:56:32 +0000 (0:00:04.367) 0:00:34.146 ****** 2026-01-08 00:59:18.154672 | orchestrator | changed: [testbed-node-0] => { 2026-01-08 00:59:18.154687 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:59:18.154700 | orchestrator | } 2026-01-08 00:59:18.154708 | orchestrator | changed: [testbed-node-1] => { 2026-01-08 00:59:18.154716 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:59:18.154725 | orchestrator | } 2026-01-08 00:59:18.154732 | orchestrator | changed: [testbed-node-2] => { 2026-01-08 00:59:18.154736 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 00:59:18.154741 | orchestrator | } 2026-01-08 00:59:18.154746 | orchestrator | 2026-01-08 00:59:18.154752 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-08 00:59:18.154757 | orchestrator | Thursday 08 January 2026 00:56:32 +0000 (0:00:00.560) 0:00:34.706 ****** 2026-01-08 00:59:18.154762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-08 00:59:18.154768 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.154781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-08 00:59:18.154794 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.154799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-08 00:59:18.154805 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.154810 | orchestrator | 2026-01-08 00:59:18.154815 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-01-08 00:59:18.154820 | orchestrator | Thursday 08 January 2026 00:56:35 +0000 (0:00:03.072) 0:00:37.778 ****** 2026-01-08 00:59:18.154825 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.154830 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.154835 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.154840 | orchestrator | 2026-01-08 00:59:18.154845 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-01-08 00:59:18.154849 | orchestrator | Thursday 08 January 2026 00:56:36 +0000 (0:00:00.407) 0:00:38.186 ****** 2026-01-08 00:59:18.154854 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.154859 | orchestrator | 2026-01-08 00:59:18.154864 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-01-08 00:59:18.154869 | orchestrator | Thursday 08 January 2026 00:56:36 +0000 (0:00:00.173) 0:00:38.360 ****** 2026-01-08 00:59:18.154874 | orchestrator | skipping: 
[testbed-node-0] 2026-01-08 00:59:18.154878 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.154883 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.154888 | orchestrator | 2026-01-08 00:59:18.154893 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-01-08 00:59:18.154898 | orchestrator | Thursday 08 January 2026 00:56:36 +0000 (0:00:00.656) 0:00:39.017 ****** 2026-01-08 00:59:18.154906 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.154911 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.154916 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.154921 | orchestrator | 2026-01-08 00:59:18.154930 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-01-08 00:59:18.154937 | orchestrator | Thursday 08 January 2026 00:56:37 +0000 (0:00:00.490) 0:00:39.507 ****** 2026-01-08 00:59:18.154942 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.154947 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.154952 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.154957 | orchestrator | 2026-01-08 00:59:18.154962 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-01-08 00:59:18.154967 | orchestrator | Thursday 08 January 2026 00:56:37 +0000 (0:00:00.321) 0:00:39.828 ****** 2026-01-08 00:59:18.154976 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.154988 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.154997 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.155005 | orchestrator | 2026-01-08 00:59:18.155013 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-01-08 00:59:18.155036 | orchestrator | Thursday 08 January 2026 00:56:38 +0000 (0:00:00.351) 0:00:40.180 ****** 2026-01-08 00:59:18.155046 | orchestrator | skipping: 
[testbed-node-0] 2026-01-08 00:59:18.155054 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.155062 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.155070 | orchestrator | 2026-01-08 00:59:18.155078 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-01-08 00:59:18.155086 | orchestrator | Thursday 08 January 2026 00:56:38 +0000 (0:00:00.554) 0:00:40.734 ****** 2026-01-08 00:59:18.155095 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.155104 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.155113 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.155121 | orchestrator | 2026-01-08 00:59:18.155130 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-01-08 00:59:18.155139 | orchestrator | Thursday 08 January 2026 00:56:39 +0000 (0:00:00.329) 0:00:41.064 ****** 2026-01-08 00:59:18.155148 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-08 00:59:18.155157 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-08 00:59:18.155166 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-08 00:59:18.155175 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.155184 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-01-08 00:59:18.155193 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-01-08 00:59:18.155201 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-01-08 00:59:18.155211 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.155219 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-01-08 00:59:18.155228 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-01-08 00:59:18.155235 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-01-08 00:59:18.155244 | 
orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.155252 | orchestrator | 2026-01-08 00:59:18.155260 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-01-08 00:59:18.155269 | orchestrator | Thursday 08 January 2026 00:56:39 +0000 (0:00:00.388) 0:00:41.452 ****** 2026-01-08 00:59:18.155276 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.155281 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.155286 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.155291 | orchestrator | 2026-01-08 00:59:18.155296 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-01-08 00:59:18.155301 | orchestrator | Thursday 08 January 2026 00:56:39 +0000 (0:00:00.368) 0:00:41.821 ****** 2026-01-08 00:59:18.155306 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.155311 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.155316 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.155320 | orchestrator | 2026-01-08 00:59:18.155325 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-01-08 00:59:18.155338 | orchestrator | Thursday 08 January 2026 00:56:40 +0000 (0:00:00.381) 0:00:42.202 ****** 2026-01-08 00:59:18.155343 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.155347 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.155352 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.155357 | orchestrator | 2026-01-08 00:59:18.155362 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-01-08 00:59:18.155367 | orchestrator | Thursday 08 January 2026 00:56:40 +0000 (0:00:00.624) 0:00:42.827 ****** 2026-01-08 00:59:18.155373 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.155378 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.155383 | 
orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.155388 | orchestrator | 2026-01-08 00:59:18.155393 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-01-08 00:59:18.155398 | orchestrator | Thursday 08 January 2026 00:56:41 +0000 (0:00:00.341) 0:00:43.169 ****** 2026-01-08 00:59:18.155402 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.155407 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.155412 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.155417 | orchestrator | 2026-01-08 00:59:18.155422 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-01-08 00:59:18.155427 | orchestrator | Thursday 08 January 2026 00:56:41 +0000 (0:00:00.315) 0:00:43.485 ****** 2026-01-08 00:59:18.155432 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.155436 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.155443 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.155451 | orchestrator | 2026-01-08 00:59:18.155458 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-01-08 00:59:18.155471 | orchestrator | Thursday 08 January 2026 00:56:41 +0000 (0:00:00.347) 0:00:43.833 ****** 2026-01-08 00:59:18.155480 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.155487 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.155495 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.155503 | orchestrator | 2026-01-08 00:59:18.155510 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-01-08 00:59:18.155525 | orchestrator | Thursday 08 January 2026 00:56:42 +0000 (0:00:00.597) 0:00:44.430 ****** 2026-01-08 00:59:18.155532 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.155540 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.155547 | 
orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.155555 | orchestrator | 2026-01-08 00:59:18.155568 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-01-08 00:59:18.155575 | orchestrator | Thursday 08 January 2026 00:56:42 +0000 (0:00:00.331) 0:00:44.761 ****** 2026-01-08 00:59:18.155585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-08 00:59:18.155600 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.155608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-08 00:59:18.155614 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.155627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}}}})  2026-01-08 00:59:18.155637 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.155642 | orchestrator | 2026-01-08 00:59:18.155647 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-01-08 00:59:18.155652 | orchestrator | Thursday 08 January 2026 00:56:44 +0000 (0:00:02.269) 0:00:47.031 ****** 2026-01-08 00:59:18.155657 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.155662 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.155667 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.155672 | orchestrator | 2026-01-08 00:59:18.155681 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-01-08 00:59:18.155693 | orchestrator | Thursday 08 January 2026 00:56:45 +0000 (0:00:00.356) 0:00:47.387 ****** 2026-01-08 00:59:18.155704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-08 00:59:18.155717 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.155730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-08 00:59:18.155745 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.155755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2025.1', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-08 00:59:18.155764 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.155770 | orchestrator | 2026-01-08 00:59:18.155775 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-01-08 00:59:18.155780 | orchestrator | Thursday 08 January 2026 00:56:47 +0000 (0:00:02.331) 0:00:49.718 ****** 2026-01-08 00:59:18.155785 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.155790 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.155795 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.155800 | orchestrator | 2026-01-08 00:59:18.155804 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-01-08 00:59:18.155812 | orchestrator | Thursday 08 January 2026 00:56:48 +0000 (0:00:00.332) 0:00:50.051 ****** 2026-01-08 00:59:18.155818 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.155822 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.155827 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.155832 | orchestrator | 2026-01-08 00:59:18.155839 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-01-08 00:59:18.155844 | orchestrator | Thursday 08 January 2026 00:56:48 +0000 (0:00:00.314) 0:00:50.366 ****** 2026-01-08 00:59:18.155849 | orchestrator | 
skipping: [testbed-node-0] 2026-01-08 00:59:18.155854 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.155859 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.155864 | orchestrator | 2026-01-08 00:59:18.155873 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-01-08 00:59:18.155878 | orchestrator | Thursday 08 January 2026 00:56:48 +0000 (0:00:00.329) 0:00:50.695 ****** 2026-01-08 00:59:18.155883 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.155887 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.155892 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.155897 | orchestrator | 2026-01-08 00:59:18.155902 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-08 00:59:18.155907 | orchestrator | Thursday 08 January 2026 00:56:49 +0000 (0:00:00.765) 0:00:51.461 ****** 2026-01-08 00:59:18.155912 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.155916 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.155921 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.155926 | orchestrator | 2026-01-08 00:59:18.155931 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-01-08 00:59:18.155936 | orchestrator | Thursday 08 January 2026 00:56:49 +0000 (0:00:00.318) 0:00:51.780 ****** 2026-01-08 00:59:18.155941 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:59:18.155946 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:59:18.155950 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:59:18.155955 | orchestrator | 2026-01-08 00:59:18.155960 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-01-08 00:59:18.155965 | orchestrator | Thursday 08 January 2026 00:56:50 +0000 (0:00:00.981) 0:00:52.761 ****** 2026-01-08 00:59:18.155970 | orchestrator | ok: 
[testbed-node-0] 2026-01-08 00:59:18.155975 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:59:18.155980 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:59:18.155985 | orchestrator | 2026-01-08 00:59:18.155989 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-01-08 00:59:18.155995 | orchestrator | Thursday 08 January 2026 00:56:51 +0000 (0:00:00.587) 0:00:53.349 ****** 2026-01-08 00:59:18.156000 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:59:18.156004 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:59:18.156009 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:59:18.156014 | orchestrator | 2026-01-08 00:59:18.156032 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-01-08 00:59:18.156038 | orchestrator | Thursday 08 January 2026 00:56:51 +0000 (0:00:00.366) 0:00:53.715 ****** 2026-01-08 00:59:18.156045 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-01-08 00:59:18.156058 | orchestrator | ...ignoring 2026-01-08 00:59:18.156068 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-01-08 00:59:18.156075 | orchestrator | ...ignoring 2026-01-08 00:59:18.156085 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-01-08 00:59:18.156093 | orchestrator | ...ignoring 2026-01-08 00:59:18.156101 | orchestrator | 2026-01-08 00:59:18.156110 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-01-08 00:59:18.156118 | orchestrator | Thursday 08 January 2026 00:57:02 +0000 (0:00:10.837) 0:01:04.553 ****** 2026-01-08 00:59:18.156127 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:59:18.156135 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:59:18.156142 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:59:18.156151 | orchestrator | 2026-01-08 00:59:18.156158 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-01-08 00:59:18.156167 | orchestrator | Thursday 08 January 2026 00:57:02 +0000 (0:00:00.355) 0:01:04.909 ****** 2026-01-08 00:59:18.156175 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.156183 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.156191 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.156200 | orchestrator | 2026-01-08 00:59:18.156208 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-01-08 00:59:18.156224 | orchestrator | Thursday 08 January 2026 00:57:03 +0000 (0:00:00.515) 0:01:05.424 ****** 2026-01-08 00:59:18.156233 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.156241 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.156250 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.156259 | orchestrator | 2026-01-08 00:59:18.156268 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-01-08 00:59:18.156277 | orchestrator | Thursday 08 January 2026 00:57:03 +0000 (0:00:00.320) 0:01:05.745 ****** 2026-01-08 00:59:18.156286 | orchestrator | skipping: 
[testbed-node-0] 2026-01-08 00:59:18.156294 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.156304 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.156311 | orchestrator | 2026-01-08 00:59:18.156320 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-01-08 00:59:18.156328 | orchestrator | Thursday 08 January 2026 00:57:04 +0000 (0:00:00.323) 0:01:06.068 ****** 2026-01-08 00:59:18.156336 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:59:18.156344 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:59:18.156351 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:59:18.156360 | orchestrator | 2026-01-08 00:59:18.156369 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-01-08 00:59:18.156378 | orchestrator | Thursday 08 January 2026 00:57:04 +0000 (0:00:00.319) 0:01:06.388 ****** 2026-01-08 00:59:18.156386 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.156402 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.156407 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.156412 | orchestrator | 2026-01-08 00:59:18.156417 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-08 00:59:18.156422 | orchestrator | Thursday 08 January 2026 00:57:04 +0000 (0:00:00.519) 0:01:06.908 ****** 2026-01-08 00:59:18.156431 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.156436 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.156441 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-01-08 00:59:18.156446 | orchestrator | 2026-01-08 00:59:18.156451 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-01-08 00:59:18.156456 | orchestrator | Thursday 08 January 2026 00:57:05 +0000 (0:00:00.450) 0:01:07.358 ****** 2026-01-08 
00:59:18.156461 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:59:18.156465 | orchestrator | 2026-01-08 00:59:18.156470 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-01-08 00:59:18.156475 | orchestrator | Thursday 08 January 2026 00:57:15 +0000 (0:00:10.449) 0:01:17.808 ****** 2026-01-08 00:59:18.156480 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:59:18.156485 | orchestrator | 2026-01-08 00:59:18.156490 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-08 00:59:18.156495 | orchestrator | Thursday 08 January 2026 00:57:15 +0000 (0:00:00.120) 0:01:17.929 ****** 2026-01-08 00:59:18.156500 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.156505 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.156509 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.156514 | orchestrator | 2026-01-08 00:59:18.156519 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-01-08 00:59:18.156524 | orchestrator | Thursday 08 January 2026 00:57:16 +0000 (0:00:00.937) 0:01:18.867 ****** 2026-01-08 00:59:18.156529 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:59:18.156534 | orchestrator | 2026-01-08 00:59:18.156539 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-01-08 00:59:18.156544 | orchestrator | Thursday 08 January 2026 00:57:24 +0000 (0:00:08.094) 0:01:26.962 ****** 2026-01-08 00:59:18.156549 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:59:18.156554 | orchestrator | 2026-01-08 00:59:18.156559 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-01-08 00:59:18.156570 | orchestrator | Thursday 08 January 2026 00:57:26 +0000 (0:00:01.609) 0:01:28.571 ****** 2026-01-08 00:59:18.156575 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:59:18.156580 | 
orchestrator | 2026-01-08 00:59:18.156584 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-01-08 00:59:18.156589 | orchestrator | Thursday 08 January 2026 00:57:28 +0000 (0:00:02.072) 0:01:30.644 ****** 2026-01-08 00:59:18.156594 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:59:18.156599 | orchestrator | 2026-01-08 00:59:18.156604 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-01-08 00:59:18.156609 | orchestrator | Thursday 08 January 2026 00:57:28 +0000 (0:00:00.139) 0:01:30.783 ****** 2026-01-08 00:59:18.156614 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.156619 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.156624 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.156628 | orchestrator | 2026-01-08 00:59:18.156633 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-01-08 00:59:18.156638 | orchestrator | Thursday 08 January 2026 00:57:29 +0000 (0:00:00.324) 0:01:31.107 ****** 2026-01-08 00:59:18.156645 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.156656 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-08 00:59:18.156666 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:59:18.156674 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:59:18.156683 | orchestrator | 2026-01-08 00:59:18.156691 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-08 00:59:18.156698 | orchestrator | skipping: no hosts matched 2026-01-08 00:59:18.156706 | orchestrator | 2026-01-08 00:59:18.156713 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-08 00:59:18.156720 | orchestrator | 2026-01-08 00:59:18.156729 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-01-08 00:59:18.156736 | orchestrator | Thursday 08 January 2026 00:57:29 +0000 (0:00:00.583) 0:01:31.691 ****** 2026-01-08 00:59:18.156744 | orchestrator | changed: [testbed-node-1] 2026-01-08 00:59:18.156751 | orchestrator | 2026-01-08 00:59:18.156759 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-08 00:59:18.156768 | orchestrator | Thursday 08 January 2026 00:57:50 +0000 (0:00:21.338) 0:01:53.030 ****** 2026-01-08 00:59:18.156776 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:59:18.156784 | orchestrator | 2026-01-08 00:59:18.156792 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-08 00:59:18.156800 | orchestrator | Thursday 08 January 2026 00:58:01 +0000 (0:00:10.621) 0:02:03.651 ****** 2026-01-08 00:59:18.156809 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:59:18.156817 | orchestrator | 2026-01-08 00:59:18.156825 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-08 00:59:18.156830 | orchestrator | 2026-01-08 00:59:18.156835 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-08 00:59:18.156840 | orchestrator | Thursday 08 January 2026 00:58:03 +0000 (0:00:02.070) 0:02:05.722 ****** 2026-01-08 00:59:18.156845 | orchestrator | changed: [testbed-node-2] 2026-01-08 00:59:18.156850 | orchestrator | 2026-01-08 00:59:18.156855 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-08 00:59:18.156860 | orchestrator | Thursday 08 January 2026 00:58:19 +0000 (0:00:16.137) 0:02:21.859 ****** 2026-01-08 00:59:18.156865 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:59:18.156870 | orchestrator | 2026-01-08 00:59:18.156875 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-08 00:59:18.156880 
| orchestrator | Thursday 08 January 2026 00:58:34 +0000 (0:00:14.570) 0:02:36.430 ****** 2026-01-08 00:59:18.156884 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:59:18.156889 | orchestrator | 2026-01-08 00:59:18.156894 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-08 00:59:18.156900 | orchestrator | 2026-01-08 00:59:18.156911 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-08 00:59:18.156925 | orchestrator | Thursday 08 January 2026 00:58:37 +0000 (0:00:02.673) 0:02:39.104 ****** 2026-01-08 00:59:18.156931 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:59:18.156936 | orchestrator | 2026-01-08 00:59:18.156941 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-08 00:59:18.156950 | orchestrator | Thursday 08 January 2026 00:58:54 +0000 (0:00:17.265) 0:02:56.369 ****** 2026-01-08 00:59:18.156955 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:59:18.156960 | orchestrator | 2026-01-08 00:59:18.156965 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-08 00:59:18.156970 | orchestrator | Thursday 08 January 2026 00:58:54 +0000 (0:00:00.554) 0:02:56.923 ****** 2026-01-08 00:59:18.156975 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:59:18.156981 | orchestrator | 2026-01-08 00:59:18.156986 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-08 00:59:18.156991 | orchestrator | 2026-01-08 00:59:18.156995 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-08 00:59:18.157000 | orchestrator | Thursday 08 January 2026 00:58:57 +0000 (0:00:02.263) 0:02:59.187 ****** 2026-01-08 00:59:18.157005 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 00:59:18.157010 | orchestrator | 
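The "Wait for MariaDB service to sync WSREP" tasks above poll each restarted node until Galera reports it as synced. A minimal sketch of that check, assuming it parses `SHOW STATUS LIKE 'wsrep_local_state_comment'` output (the function name and parsing are illustrative, not kolla-ansible's actual code):

```python
def is_wsrep_synced(show_status_output: str) -> bool:
    """Parse tab-separated SHOW STATUS output and report Galera sync state.

    A node is considered ready only when wsrep_local_state_comment is
    exactly "Synced"; states like "Donor/Desynced" or "Joined" mean the
    node is still catching up via SST/IST.
    """
    for line in show_status_output.splitlines():
        name, _, value = line.partition("\t")
        if name == "wsrep_local_state_comment":
            return value.strip() == "Synced"
    return False  # variable absent: Galera provider not loaded, treat as unsynced
```

A deploy loop would call this repeatedly with fresh status output until it returns true, which is why the wait tasks in the log take a few seconds per node.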
2026-01-08 00:59:18.157015 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-01-08 00:59:18.157081 | orchestrator | Thursday 08 January 2026 00:58:57 +0000 (0:00:00.573) 0:02:59.760 ****** 2026-01-08 00:59:18.157095 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.157104 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.157111 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:59:18.157116 | orchestrator | 2026-01-08 00:59:18.157121 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-01-08 00:59:18.157126 | orchestrator | Thursday 08 January 2026 00:58:59 +0000 (0:00:02.048) 0:03:01.809 ****** 2026-01-08 00:59:18.157131 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.157136 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.157141 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:59:18.157146 | orchestrator | 2026-01-08 00:59:18.157151 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-01-08 00:59:18.157155 | orchestrator | Thursday 08 January 2026 00:59:01 +0000 (0:00:02.212) 0:03:04.021 ****** 2026-01-08 00:59:18.157160 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.157165 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.157170 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:59:18.157175 | orchestrator | 2026-01-08 00:59:18.157180 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-01-08 00:59:18.157185 | orchestrator | Thursday 08 January 2026 00:59:04 +0000 (0:00:02.636) 0:03:06.658 ****** 2026-01-08 00:59:18.157189 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.157194 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.157199 | orchestrator | changed: [testbed-node-0] 2026-01-08 00:59:18.157204 | orchestrator | 
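The `custom_member_list` entries quoted in the mariadb task items earlier follow an active/passive pattern: the first Galera node is the only active HAProxy backend and the rest are marked `backup`. A hedged reconstruction of how such lines could be rendered (the helper name and signature are hypothetical, not the actual template code):

```python
def mariadb_member_list(hosts, port=3306):
    """Render HAProxy 'server' lines for a Galera shard.

    hosts is a list of (name, address) pairs; only the first host is an
    active backend, all others get the 'backup' keyword so HAProxy only
    fails over to them, avoiding multi-writer conflicts in Galera.
    """
    lines = []
    for index, (name, address) in enumerate(hosts):
        line = (f" server {name} {address}:{port} check port {port} "
                f"inter 2000 rise 2 fall 5")
        if index > 0:
            line += " backup"  # passive member, used only on failover
        lines.append(line)
    return lines
```

The `check port 3306 inter 2000 rise 2 fall 5` options match the log: health-check the MySQL port every 2000 ms, require 2 successes to mark a server up and 5 failures to mark it down.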
2026-01-08 00:59:18.157209 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-01-08 00:59:18.157214 | orchestrator | Thursday 08 January 2026 00:59:07 +0000 (0:00:02.407) 0:03:09.065 ****** 2026-01-08 00:59:18.157219 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:59:18.157224 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:59:18.157229 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:59:18.157234 | orchestrator | 2026-01-08 00:59:18.157239 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-01-08 00:59:18.157244 | orchestrator | Thursday 08 January 2026 00:59:11 +0000 (0:00:04.437) 0:03:13.503 ****** 2026-01-08 00:59:18.157249 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.157254 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.157259 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.157264 | orchestrator | 2026-01-08 00:59:18.157268 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-01-08 00:59:18.157281 | orchestrator | Thursday 08 January 2026 00:59:13 +0000 (0:00:02.478) 0:03:15.982 ****** 2026-01-08 00:59:18.157286 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.157291 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.157295 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.157300 | orchestrator | 2026-01-08 00:59:18.157305 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-08 00:59:18.157310 | orchestrator | Thursday 08 January 2026 00:59:14 +0000 (0:00:00.536) 0:03:16.518 ****** 2026-01-08 00:59:18.157315 | orchestrator | ok: [testbed-node-0] 2026-01-08 00:59:18.157321 | orchestrator | ok: [testbed-node-1] 2026-01-08 00:59:18.157326 | orchestrator | ok: [testbed-node-2] 2026-01-08 00:59:18.157331 | orchestrator | 2026-01-08 00:59:18.157336 | 
orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-08 00:59:18.157341 | orchestrator | Thursday 08 January 2026 00:59:17 +0000 (0:00:02.741) 0:03:19.260 ****** 2026-01-08 00:59:18.157347 | orchestrator | skipping: [testbed-node-0] 2026-01-08 00:59:18.157351 | orchestrator | skipping: [testbed-node-1] 2026-01-08 00:59:18.157357 | orchestrator | skipping: [testbed-node-2] 2026-01-08 00:59:18.157362 | orchestrator | 2026-01-08 00:59:18.157367 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 00:59:18.157373 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-08 00:59:18.157378 | orchestrator | testbed-node-0 : ok=36  changed=17  unreachable=0 failed=0 skipped=39  rescued=0 ignored=1  2026-01-08 00:59:18.157384 | orchestrator | testbed-node-1 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1  2026-01-08 00:59:18.157389 | orchestrator | testbed-node-2 : ok=22  changed=8  unreachable=0 failed=0 skipped=45  rescued=0 ignored=1  2026-01-08 00:59:18.157394 | orchestrator | 2026-01-08 00:59:18.157399 | orchestrator | 2026-01-08 00:59:18.157410 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 00:59:18.157415 | orchestrator | Thursday 08 January 2026 00:59:17 +0000 (0:00:00.442) 0:03:19.702 ****** 2026-01-08 00:59:18.157420 | orchestrator | =============================================================================== 2026-01-08 00:59:18.157428 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 37.48s 2026-01-08 00:59:18.157433 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 25.19s 2026-01-08 00:59:18.157438 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 17.27s 2026-01-08 00:59:18.157443 | orchestrator | 
mariadb : Check MariaDB service port liveness -------------------------- 10.84s 2026-01-08 00:59:18.157448 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.45s 2026-01-08 00:59:18.157453 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.09s 2026-01-08 00:59:18.157458 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.74s 2026-01-08 00:59:18.157463 | orchestrator | service-check : mariadb | Get container facts --------------------------- 4.44s 2026-01-08 00:59:18.157468 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 4.37s 2026-01-08 00:59:18.157473 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.23s 2026-01-08 00:59:18.157479 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.05s 2026-01-08 00:59:18.157484 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.76s 2026-01-08 00:59:18.157489 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.11s 2026-01-08 00:59:18.157494 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.07s 2026-01-08 00:59:18.157502 | orchestrator | Check MariaDB service --------------------------------------------------- 2.86s 2026-01-08 00:59:18.157508 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.76s 2026-01-08 00:59:18.157513 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.74s 2026-01-08 00:59:18.157518 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.73s 2026-01-08 00:59:18.157522 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.64s 2026-01-08 00:59:18.157527 | orchestrator | 
service-check : mariadb | Fail if containers are missing or not running --- 2.48s 2026-01-08 00:59:18.157532 | orchestrator | 2026-01-08 00:59:18 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:59:21.213016 | orchestrator | 2026-01-08 00:59:21 | INFO  | Task c38eeea8-c410-4b31-a6cc-a4d3f8ad6a81 is in state STARTED 2026-01-08 00:59:21.216385 | orchestrator | 2026-01-08 00:59:21 | INFO  | Task 7af06e14-dbc0-43fd-b3fb-cad4c541ba78 is in state STARTED 2026-01-08 00:59:21.218872 | orchestrator | 2026-01-08 00:59:21 | INFO  | Task 5ed2db58-7354-47ee-8c3c-ccdd177b60ec is in state STARTED 2026-01-08 00:59:21.218911 | orchestrator | 2026-01-08 00:59:21 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:59:54.676899 | orchestrator |
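The repeated "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" messages above come from a polling loop over the queued deploy tasks. A minimal sketch of that pattern, with the state-lookup callable standing in for the real task API (names are illustrative):

```python
import itertools


def wait_for_tasks(task_ids, get_state, sleep=lambda s: None):
    """Poll task states until none are STARTED; return (rounds, final states).

    get_state(task_id) returns the current state string; sleep is injected
    so the delay between rounds ("Wait 1 second(s)") can be skipped in tests.
    """
    for rounds in itertools.count(1):
        states = {tid: get_state(tid) for tid in task_ids}
        if all(state != "STARTED" for state in states.values()):
            return rounds, states
        sleep(1)  # "Wait 1 second(s) until the next check"
```

Each round queries every task once, so a slow task keeps all three IDs appearing in the log until the whole batch reaches SUCCESS (or a terminal failure state).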
2026-01-08 00:59:54.676957 | orchestrator | 2026-01-08 00:59:54 | INFO  | Task c38eeea8-c410-4b31-a6cc-a4d3f8ad6a81 is in state SUCCESS
2026-01-08 00:59:54.678419 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-08 00:59:54.678472 | orchestrator | 2.16.14
2026-01-08 00:59:54.678481 | orchestrator |
2026-01-08 00:59:54.678488 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-01-08 00:59:54.678495 | orchestrator |
2026-01-08 00:59:54.678501 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-01-08 00:59:54.678516 | orchestrator | Thursday 08 January 2026 00:57:48 +0000 (0:00:00.626) 0:00:00.626 ******
2026-01-08 00:59:54.678522 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-08 00:59:54.678530 | orchestrator |
2026-01-08 00:59:54.678537 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-01-08 00:59:54.678543 | orchestrator | Thursday 08 January 2026 00:57:48 +0000 (0:00:00.644) 0:00:01.270 ******
2026-01-08 00:59:54.678565 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:59:54.678571 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:59:54.678576 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:59:54.678581 | orchestrator |
2026-01-08 00:59:54.678587 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-01-08 00:59:54.678592 | orchestrator | Thursday 08 January 2026 00:57:49 +0000 (0:00:00.737) 0:00:02.007 ******
2026-01-08 00:59:54.678598 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:59:54.678603 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:59:54.678608 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:59:54.678614 | orchestrator |
2026-01-08 00:59:54.678619 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-08 00:59:54.678670 | orchestrator | Thursday 08 January 2026 00:57:49 +0000 (0:00:00.285) 0:00:02.293 ******
2026-01-08 00:59:54.678676 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:59:54.678681 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:59:54.678686 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:59:54.678732 | orchestrator |
2026-01-08 00:59:54.678738 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-08 00:59:54.678942 | orchestrator | Thursday 08 January 2026 00:57:50 +0000 (0:00:00.918) 0:00:03.211 ******
2026-01-08 00:59:54.678960 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:59:54.678968 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:59:54.678975 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:59:54.678983 | orchestrator |
2026-01-08 00:59:54.678990 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-01-08 00:59:54.678997 | orchestrator | Thursday 08 January 2026 00:57:51 +0000 (0:00:00.332) 0:00:03.544 ******
2026-01-08 00:59:54.679004 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:59:54.679011 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:59:54.679018 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:59:54.679026 | orchestrator |
2026-01-08 00:59:54.679034 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-01-08 00:59:54.679041 | orchestrator | Thursday 08 January 2026 00:57:51 +0000 (0:00:00.330) 0:00:03.874 ******
2026-01-08 00:59:54.679049 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:59:54.679056 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:59:54.679062 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:59:54.679069 | orchestrator |
2026-01-08 00:59:54.679076 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-01-08 00:59:54.679083 | orchestrator | Thursday 08 January 2026 00:57:51 +0000 (0:00:00.307) 0:00:04.181 ******
2026-01-08 00:59:54.679090 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:59:54.679097 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:59:54.679104 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:59:54.679111 | orchestrator |
2026-01-08 00:59:54.679118 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-01-08 00:59:54.679124 | orchestrator | Thursday 08 January 2026 00:57:52 +0000 (0:00:00.516) 0:00:04.698 ******
2026-01-08 00:59:54.679131 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:59:54.679138 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:59:54.679144 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:59:54.679150 | orchestrator |
2026-01-08 00:59:54.679157 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-01-08 00:59:54.679197 | orchestrator | Thursday 08 January 2026 00:57:52 +0000 (0:00:00.304) 0:00:05.003 ******
2026-01-08 00:59:54.679204 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-08 00:59:54.679210 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-08 00:59:54.679237 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-08 00:59:54.679283 | orchestrator |
2026-01-08 00:59:54.679289 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-01-08 00:59:54.679323 | orchestrator | Thursday 08 January 2026 00:57:53 +0000 (0:00:00.660) 0:00:05.663 ******
2026-01-08 00:59:54.679339 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:59:54.679346 | orchestrator | ok: [testbed-node-4]
2026-01-08 00:59:54.679352 | orchestrator | ok: [testbed-node-5]
2026-01-08 00:59:54.679358 | orchestrator |
2026-01-08 00:59:54.679364 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-01-08 00:59:54.679370 | orchestrator | Thursday 08 January 2026 00:57:53 +0000 (0:00:00.419) 0:00:06.083 ******
2026-01-08 00:59:54.679377 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-08 00:59:54.679383 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-08 00:59:54.679389 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-08 00:59:54.679396 | orchestrator |
2026-01-08 00:59:54.679402 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-01-08 00:59:54.679408 | orchestrator | Thursday 08 January 2026 00:57:55 +0000 (0:00:01.999) 0:00:08.083 ******
2026-01-08 00:59:54.679414 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-08 00:59:54.679421 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-08 00:59:54.679427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-08 00:59:54.679433 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:59:54.679439 | orchestrator |
2026-01-08 00:59:54.679458 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-01-08 00:59:54.679464 | orchestrator | Thursday 08 January 2026 00:57:56 +0000 (0:00:00.666) 0:00:08.750 ******
2026-01-08 00:59:54.679472 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-08 00:59:54.679480 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.679661 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.679672 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:59:54.679678 | orchestrator | 2026-01-08 00:59:54.679684 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-08 00:59:54.679690 | orchestrator | Thursday 08 January 2026 00:57:57 +0000 (0:00:00.816) 0:00:09.566 ****** 2026-01-08 00:59:54.679704 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.679711 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.679718 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.679731 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:59:54.679737 | orchestrator | 2026-01-08 00:59:54.679743 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-08 00:59:54.679749 | orchestrator | Thursday 08 January 2026 00:57:57 +0000 (0:00:00.338) 0:00:09.905 ****** 2026-01-08 00:59:54.679755 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd4d6f549a26f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-08 00:57:54.233003', 'end': '2026-01-08 00:57:54.264278', 'delta': '0:00:00.031275', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d4d6f549a26f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-08 00:59:54.679763 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '2ac7f7c19772', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-08 00:57:54.889731', 'end': '2026-01-08 00:57:54.923723', 'delta': '0:00:00.033992', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2ac7f7c19772'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-08 00:59:54.679791 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd1f63bacaf74', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-08 00:57:55.387621', 'end': '2026-01-08 00:57:55.411878', 'delta': '0:00:00.024257', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d1f63bacaf74'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-08 00:59:54.679798 | orchestrator | 2026-01-08 00:59:54.679804 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-08 00:59:54.679810 | orchestrator | Thursday 08 January 2026 00:57:57 +0000 (0:00:00.209) 0:00:10.114 ****** 2026-01-08 00:59:54.679816 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:59:54.679822 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:59:54.679828 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:59:54.679834 | orchestrator | 2026-01-08 00:59:54.679840 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-08 00:59:54.679845 | orchestrator | Thursday 08 January 2026 00:57:58 +0000 (0:00:00.492) 0:00:10.606 ****** 2026-01-08 00:59:54.679851 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-01-08 00:59:54.679857 | orchestrator | 2026-01-08 00:59:54.679863 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-08 00:59:54.679871 | orchestrator | Thursday 08 January 2026 00:57:59 +0000 (0:00:01.556) 0:00:12.163 ****** 2026-01-08 
00:59:54.679877 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:59:54.679883 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:59:54.679889 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:59:54.679894 | orchestrator |
2026-01-08 00:59:54.679901 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-01-08 00:59:54.679910 | orchestrator | Thursday 08 January 2026 00:57:59 +0000 (0:00:00.331) 0:00:12.494 ******
2026-01-08 00:59:54.679916 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:59:54.679922 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:59:54.679928 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:59:54.679934 | orchestrator |
2026-01-08 00:59:54.679940 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-08 00:59:54.679946 | orchestrator | Thursday 08 January 2026 00:58:00 +0000 (0:00:00.390) 0:00:12.884 ******
2026-01-08 00:59:54.679968 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:59:54.679974 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:59:54.679980 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:59:54.679986 | orchestrator |
2026-01-08 00:59:54.679993 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-01-08 00:59:54.680013 | orchestrator | Thursday 08 January 2026 00:58:00 +0000 (0:00:00.506) 0:00:13.391 ******
2026-01-08 00:59:54.680019 | orchestrator | ok: [testbed-node-3]
2026-01-08 00:59:54.680025 | orchestrator |
2026-01-08 00:59:54.680031 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-01-08 00:59:54.680037 | orchestrator | Thursday 08 January 2026 00:58:01 +0000 (0:00:00.153) 0:00:13.545 ******
2026-01-08 00:59:54.680043 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:59:54.680049 | orchestrator |
2026-01-08 00:59:54.680055 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-08 00:59:54.680061 | orchestrator | Thursday 08 January 2026 00:58:01 +0000 (0:00:00.280) 0:00:13.825 ******
2026-01-08 00:59:54.680067 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:59:54.680073 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:59:54.680078 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:59:54.680084 | orchestrator |
2026-01-08 00:59:54.680090 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-01-08 00:59:54.680096 | orchestrator | Thursday 08 January 2026 00:58:01 +0000 (0:00:00.301) 0:00:14.127 ******
2026-01-08 00:59:54.680102 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:59:54.680108 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:59:54.680114 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:59:54.680120 | orchestrator |
2026-01-08 00:59:54.680126 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-08 00:59:54.680132 | orchestrator | Thursday 08 January 2026 00:58:01 +0000 (0:00:00.320) 0:00:14.448 ******
2026-01-08 00:59:54.680138 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:59:54.680144 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:59:54.680149 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:59:54.680155 | orchestrator |
2026-01-08 00:59:54.680211 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-08 00:59:54.680254 | orchestrator | Thursday 08 January 2026 00:58:02 +0000 (0:00:00.535) 0:00:14.984 ******
2026-01-08 00:59:54.680263 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:59:54.680269 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:59:54.680275 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:59:54.680281 | orchestrator |
2026-01-08 00:59:54.680287 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-08 00:59:54.680292 | orchestrator | Thursday 08 January 2026 00:58:02 +0000 (0:00:00.327) 0:00:15.311 ******
2026-01-08 00:59:54.680298 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:59:54.680304 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:59:54.680310 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:59:54.680316 | orchestrator |
2026-01-08 00:59:54.680324 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-08 00:59:54.680331 | orchestrator | Thursday 08 January 2026 00:58:03 +0000 (0:00:00.351) 0:00:15.663 ******
2026-01-08 00:59:54.680337 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:59:54.680343 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:59:54.680349 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:59:54.680361 | orchestrator |
2026-01-08 00:59:54.680385 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-08 00:59:54.680392 | orchestrator | Thursday 08 January 2026 00:58:03 +0000 (0:00:00.319) 0:00:15.983 ******
2026-01-08 00:59:54.680397 | orchestrator | skipping: [testbed-node-3]
2026-01-08 00:59:54.680403 | orchestrator | skipping: [testbed-node-4]
2026-01-08 00:59:54.680408 | orchestrator | skipping: [testbed-node-5]
2026-01-08 00:59:54.680413 | orchestrator |
2026-01-08 00:59:54.680418 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-01-08 00:59:54.680424 | orchestrator | Thursday 08 January 2026 00:58:03 +0000 (0:00:00.532) 0:00:16.515 ******
2026-01-08 00:59:54.680430 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a2587794--ee13--56a9--b71d--149b2fd55b33-osd--block--a2587794--ee13--56a9--b71d--149b2fd55b33',
'dm-uuid-LVM-bEENzoABaKlXIVix9f7oeh01iGEYhwYje5ILS3OJKDIlIgaK2J1mQi3X1kQLOMsS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--703f1367--865b--52a8--8f96--c728fe171d20-osd--block--703f1367--865b--52a8--8f96--c728fe171d20', 'dm-uuid-LVM-HNtczakD2ja3G1Vo2m6WZZaGI4em8Ptu21dROg2ZrlYRnSlALMwv07zh00XTq3Jz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680447 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680460 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680503 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680519 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2', 'scsi-SQEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2-part1', 'scsi-SQEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2-part14', 'scsi-SQEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2-part15', 
'scsi-SQEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2-part16', 'scsi-SQEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:59:54.680527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a2587794--ee13--56a9--b71d--149b2fd55b33-osd--block--a2587794--ee13--56a9--b71d--149b2fd55b33'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1XbyXP-e8gb-I4vr-LRiY-ExbH-DF2J-vfvv0K', 'scsi-0QEMU_QEMU_HARDDISK_124f655d-2588-4acd-9ece-c76299342e82', 'scsi-SQEMU_QEMU_HARDDISK_124f655d-2588-4acd-9ece-c76299342e82'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:59:54.680550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--738668c3--85d9--5999--8ba6--58353e2d69fe-osd--block--738668c3--85d9--5999--8ba6--58353e2d69fe', 'dm-uuid-LVM-V6XNBw63PUUGEqjR32uErniLBklwwxqrPlXQfKTbKJqfnHVANQQvB8h0YVlx4Mow'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--703f1367--865b--52a8--8f96--c728fe171d20-osd--block--703f1367--865b--52a8--8f96--c728fe171d20'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8z1lYd-cIsw-fkBh-faUa-wXb7-czkT-xlmZfI', 'scsi-0QEMU_QEMU_HARDDISK_6a42190a-2484-4e8d-b5ec-29d2f01455ea', 'scsi-SQEMU_QEMU_HARDDISK_6a42190a-2484-4e8d-b5ec-29d2f01455ea'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:59:54.680565 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3efd50ac--0c86--56a3--96dd--80e79744aaab-osd--block--3efd50ac--0c86--56a3--96dd--80e79744aaab', 'dm-uuid-LVM-AdLH4Bzf4albY0ZKx0mHhTp84P7qfNw2mJdNadtBaDkMxt1H9L9WLPqmqsmsA33M'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0e18c249-c135-4dbc-997b-e877fb7ddadb', 'scsi-SQEMU_QEMU_HARDDISK_0e18c249-c135-4dbc-997b-e877fb7ddadb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:59:54.680577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-08-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:59:54.680592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-01-08 00:59:54.680612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680630 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:59:54.680642 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680666 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1', 'scsi-SQEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part1', 'scsi-SQEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part14', 'scsi-SQEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part15', 'scsi-SQEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part16', 'scsi-SQEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:59:54.680678 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--738668c3--85d9--5999--8ba6--58353e2d69fe-osd--block--738668c3--85d9--5999--8ba6--58353e2d69fe'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-SKcPqy-GBza-EcQ9-39iv-knfu-8xbO-YhTNYD', 'scsi-0QEMU_QEMU_HARDDISK_d8742191-d797-4305-9777-b3b4a7e3f85b', 'scsi-SQEMU_QEMU_HARDDISK_d8742191-d797-4305-9777-b3b4a7e3f85b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:59:54.680688 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--3efd50ac--0c86--56a3--96dd--80e79744aaab-osd--block--3efd50ac--0c86--56a3--96dd--80e79744aaab'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1XcJBj-e4Zo-3zFx-nX02-BuBE-lIU7-9n3Hhr', 'scsi-0QEMU_QEMU_HARDDISK_c2b7f2d1-b409-4164-9c96-a325340a2181', 'scsi-SQEMU_QEMU_HARDDISK_c2b7f2d1-b409-4164-9c96-a325340a2181'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:59:54.680695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7a76cbc0-0137-4ff9-923d-b4b1dcc050dd', 'scsi-SQEMU_QEMU_HARDDISK_7a76cbc0-0137-4ff9-923d-b4b1dcc050dd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:59:54.680702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-08-00-03-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:59:54.680712 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e7c35fc3--220b--5a3c--9d36--601219d17f28-osd--block--e7c35fc3--220b--5a3c--9d36--601219d17f28', 'dm-uuid-LVM-XuDkDnBuxUAkQxcjcNSDHfD1ciReVtRA6WECqtp650LVlgJO2Ty9MSobiJ12IINR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680738 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:59:54.680751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--1538380d--5182--5482--9616--e6fa16e7f592-osd--block--1538380d--5182--5482--9616--e6fa16e7f592', 'dm-uuid-LVM-nEL93Xw1a3nKIgzDWGLG37Ki0dfG35InHkcT1Pv33vgx09JnaiRx3cG9AVL1mEMd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680758 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680764 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680807 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-08 00:59:54.680828 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa', 'scsi-SQEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part1', 'scsi-SQEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part14', 'scsi-SQEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part15', 'scsi-SQEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part16', 'scsi-SQEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:59:54.680836 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e7c35fc3--220b--5a3c--9d36--601219d17f28-osd--block--e7c35fc3--220b--5a3c--9d36--601219d17f28'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WYjp2M-oBFi-jndj-unti-K3JK-LgdU-NtU3Qm', 'scsi-0QEMU_QEMU_HARDDISK_c2709d8e-63c5-44e7-8dd5-568d2763b490', 'scsi-SQEMU_QEMU_HARDDISK_c2709d8e-63c5-44e7-8dd5-568d2763b490'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:59:54.680842 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1538380d--5182--5482--9616--e6fa16e7f592-osd--block--1538380d--5182--5482--9616--e6fa16e7f592'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fQ8LPb-Dq42-HIia-0FfU-1SGY-R4UN-qL0FTy', 'scsi-0QEMU_QEMU_HARDDISK_083917d3-ae8a-40d6-964f-5c24c6020ef0', 'scsi-SQEMU_QEMU_HARDDISK_083917d3-ae8a-40d6-964f-5c24c6020ef0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:59:54.680853 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca658e2c-6884-4cb0-a984-2a30c4218b42', 'scsi-SQEMU_QEMU_HARDDISK_ca658e2c-6884-4cb0-a984-2a30c4218b42'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:59:54.680865 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-08-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-08 00:59:54.680872 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:59:54.680879 | orchestrator | 2026-01-08 00:59:54.680886 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-01-08 00:59:54.680893 | orchestrator | Thursday 08 January 2026 00:58:04 +0000 (0:00:00.565) 0:00:17.081 ****** 2026-01-08 00:59:54.680900 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a2587794--ee13--56a9--b71d--149b2fd55b33-osd--block--a2587794--ee13--56a9--b71d--149b2fd55b33', 'dm-uuid-LVM-bEENzoABaKlXIVix9f7oeh01iGEYhwYje5ILS3OJKDIlIgaK2J1mQi3X1kQLOMsS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.680910 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--703f1367--865b--52a8--8f96--c728fe171d20-osd--block--703f1367--865b--52a8--8f96--c728fe171d20', 'dm-uuid-LVM-HNtczakD2ja3G1Vo2m6WZZaGI4em8Ptu21dROg2ZrlYRnSlALMwv07zh00XTq3Jz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.680917 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.680929 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.680937 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.680950 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.680958 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.680968 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.680975 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.680986 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.680999 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2', 'scsi-SQEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2-part1', 'scsi-SQEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2-part14', 'scsi-SQEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2-part15', 'scsi-SQEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2-part16', 'scsi-SQEMU_QEMU_HARDDISK_59a47b62-1fe1-4b4a-8031-07381f636ec2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
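The skips above all trace back to the same guard: the ceph-facts task only builds a device list when `osd_auto_discovery | default(False) | bool` is true, and even then it must exclude loop devices, device-mapper volumes, removable media, and disks already holding LVM or partition state. A minimal Python sketch of that filtering idea, assuming a simplified `ansible_devices`-shaped dict (the function name `discover_osd_devices` and the exact skip rules are illustrative, not ceph-ansible's actual implementation):

```python
def discover_osd_devices(ansible_devices, osd_auto_discovery=False):
    """Hypothetical sketch: pick bare disks eligible as Ceph OSDs.

    Mirrors the log's behaviour in spirit: when osd_auto_discovery is
    false every device is skipped; otherwise loop/dm/sr devices,
    removable media, and disks that already have holders (e.g. LVM)
    or partitions are filtered out.
    """
    if not osd_auto_discovery:
        return []  # conditional false -> every item is skipped, as in the log
    devices = []
    for name, info in ansible_devices.items():
        if name.startswith(("loop", "dm-", "sr")):
            continue  # virtual / optical devices are never OSD candidates
        if info.get("removable") == "1":
            continue  # removable media (e.g. the config-2 DVD) excluded
        if info.get("holders") or info.get("partitions"):
            continue  # already in use: LVM physical volume or partitioned root disk
        devices.append(f"/dev/{name}")
    return devices
```

With facts shaped like the entries in this log, the partitioned root disk (`sda`), the LVM-backed OSD disks (`sdb`, `sdc`), the loop devices, and `sr0` would all be filtered out, leaving only an untouched spare disk such as `sdd`.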
2026-01-08 00:59:54.681010 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--738668c3--85d9--5999--8ba6--58353e2d69fe-osd--block--738668c3--85d9--5999--8ba6--58353e2d69fe', 'dm-uuid-LVM-V6XNBw63PUUGEqjR32uErniLBklwwxqrPlXQfKTbKJqfnHVANQQvB8h0YVlx4Mow'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681016 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a2587794--ee13--56a9--b71d--149b2fd55b33-osd--block--a2587794--ee13--56a9--b71d--149b2fd55b33'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1XbyXP-e8gb-I4vr-LRiY-ExbH-DF2J-vfvv0K', 'scsi-0QEMU_QEMU_HARDDISK_124f655d-2588-4acd-9ece-c76299342e82', 'scsi-SQEMU_QEMU_HARDDISK_124f655d-2588-4acd-9ece-c76299342e82'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681028 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3efd50ac--0c86--56a3--96dd--80e79744aaab-osd--block--3efd50ac--0c86--56a3--96dd--80e79744aaab', 'dm-uuid-LVM-AdLH4Bzf4albY0ZKx0mHhTp84P7qfNw2mJdNadtBaDkMxt1H9L9WLPqmqsmsA33M'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681040 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--703f1367--865b--52a8--8f96--c728fe171d20-osd--block--703f1367--865b--52a8--8f96--c728fe171d20'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8z1lYd-cIsw-fkBh-faUa-wXb7-czkT-xlmZfI', 'scsi-0QEMU_QEMU_HARDDISK_6a42190a-2484-4e8d-b5ec-29d2f01455ea', 'scsi-SQEMU_QEMU_HARDDISK_6a42190a-2484-4e8d-b5ec-29d2f01455ea'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681046 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681055 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0e18c249-c135-4dbc-997b-e877fb7ddadb', 'scsi-SQEMU_QEMU_HARDDISK_0e18c249-c135-4dbc-997b-e877fb7ddadb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681068 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681076 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681082 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-08-00-03-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681094 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681101 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:59:54.681107 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681116 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681122 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681132 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e7c35fc3--220b--5a3c--9d36--601219d17f28-osd--block--e7c35fc3--220b--5a3c--9d36--601219d17f28', 'dm-uuid-LVM-XuDkDnBuxUAkQxcjcNSDHfD1ciReVtRA6WECqtp650LVlgJO2Ty9MSobiJ12IINR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681138 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681148 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1538380d--5182--5482--9616--e6fa16e7f592-osd--block--1538380d--5182--5482--9616--e6fa16e7f592', 'dm-uuid-LVM-nEL93Xw1a3nKIgzDWGLG37Ki0dfG35InHkcT1Pv33vgx09JnaiRx3cG9AVL1mEMd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681157 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1', 'scsi-SQEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part1', 'scsi-SQEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part14', 'scsi-SQEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part15', 'scsi-SQEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part16', 'scsi-SQEMU_QEMU_HARDDISK_33ab2247-e822-4059-8ff1-fecc56de3eb1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-08 00:59:54.681189 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681198 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--738668c3--85d9--5999--8ba6--58353e2d69fe-osd--block--738668c3--85d9--5999--8ba6--58353e2d69fe'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-SKcPqy-GBza-EcQ9-39iv-knfu-8xbO-YhTNYD', 'scsi-0QEMU_QEMU_HARDDISK_d8742191-d797-4305-9777-b3b4a7e3f85b', 'scsi-SQEMU_QEMU_HARDDISK_d8742191-d797-4305-9777-b3b4a7e3f85b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681205 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681215 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--3efd50ac--0c86--56a3--96dd--80e79744aaab-osd--block--3efd50ac--0c86--56a3--96dd--80e79744aaab'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1XcJBj-e4Zo-3zFx-nX02-BuBE-lIU7-9n3Hhr', 'scsi-0QEMU_QEMU_HARDDISK_c2b7f2d1-b409-4164-9c96-a325340a2181', 'scsi-SQEMU_QEMU_HARDDISK_c2b7f2d1-b409-4164-9c96-a325340a2181'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681226 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681233 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7a76cbc0-0137-4ff9-923d-b4b1dcc050dd', 'scsi-SQEMU_QEMU_HARDDISK_7a76cbc0-0137-4ff9-923d-b4b1dcc050dd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681239 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681249 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-08-00-03-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681256 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681262 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:59:54.681271 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681281 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681288 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681298 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa', 'scsi-SQEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part1', 'scsi-SQEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part14', 'scsi-SQEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part15', 'scsi-SQEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part16', 'scsi-SQEMU_QEMU_HARDDISK_2593653b-a2f8-487c-a2d2-926b6edd94aa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681307 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e7c35fc3--220b--5a3c--9d36--601219d17f28-osd--block--e7c35fc3--220b--5a3c--9d36--601219d17f28'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WYjp2M-oBFi-jndj-unti-K3JK-LgdU-NtU3Qm', 'scsi-0QEMU_QEMU_HARDDISK_c2709d8e-63c5-44e7-8dd5-568d2763b490', 'scsi-SQEMU_QEMU_HARDDISK_c2709d8e-63c5-44e7-8dd5-568d2763b490'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681318 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1538380d--5182--5482--9616--e6fa16e7f592-osd--block--1538380d--5182--5482--9616--e6fa16e7f592'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fQ8LPb-Dq42-HIia-0FfU-1SGY-R4UN-qL0FTy', 'scsi-0QEMU_QEMU_HARDDISK_083917d3-ae8a-40d6-964f-5c24c6020ef0', 'scsi-SQEMU_QEMU_HARDDISK_083917d3-ae8a-40d6-964f-5c24c6020ef0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681325 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ca658e2c-6884-4cb0-a984-2a30c4218b42', 'scsi-SQEMU_QEMU_HARDDISK_ca658e2c-6884-4cb0-a984-2a30c4218b42'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681335 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-08-00-03-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-08 00:59:54.681341 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:59:54.681348 | orchestrator | 2026-01-08 00:59:54.681354 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-08 00:59:54.681361 | orchestrator | Thursday 08 January 2026 00:58:05 +0000 (0:00:00.624) 0:00:17.705 ****** 2026-01-08 00:59:54.681368 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:59:54.681374 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:59:54.681381 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:59:54.681387 | orchestrator | 2026-01-08 00:59:54.681394 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] *************** 2026-01-08 00:59:54.681404 | orchestrator | Thursday 08 January 2026 00:58:05 +0000 (0:00:00.727) 0:00:18.433 ****** 2026-01-08 00:59:54.681411 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:59:54.681417 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:59:54.681423 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:59:54.681430 | orchestrator | 2026-01-08 00:59:54.681436 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-08 00:59:54.681442 | orchestrator | Thursday 08 January 2026 00:58:06 +0000 (0:00:00.508) 0:00:18.942 ****** 2026-01-08 00:59:54.681449 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:59:54.681456 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:59:54.681462 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:59:54.681468 | orchestrator | 2026-01-08 00:59:54.681474 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-08 00:59:54.681485 | orchestrator | Thursday 08 January 2026 00:58:07 +0000 (0:00:00.666) 0:00:19.609 ****** 2026-01-08 00:59:54.681491 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:59:54.681498 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:59:54.681505 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:59:54.681510 | orchestrator | 2026-01-08 00:59:54.681516 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-08 00:59:54.681523 | orchestrator | Thursday 08 January 2026 00:58:07 +0000 (0:00:00.316) 0:00:19.925 ****** 2026-01-08 00:59:54.681529 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:59:54.681534 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:59:54.681541 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:59:54.681547 | orchestrator | 2026-01-08 00:59:54.681554 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2026-01-08 00:59:54.681560 | orchestrator | Thursday 08 January 2026 00:58:07 +0000 (0:00:00.421) 0:00:20.347 ****** 2026-01-08 00:59:54.681567 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:59:54.681574 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:59:54.681581 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:59:54.681587 | orchestrator | 2026-01-08 00:59:54.681593 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-08 00:59:54.681600 | orchestrator | Thursday 08 January 2026 00:58:08 +0000 (0:00:00.535) 0:00:20.883 ****** 2026-01-08 00:59:54.681607 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-01-08 00:59:54.681614 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-01-08 00:59:54.681620 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-01-08 00:59:54.681626 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-01-08 00:59:54.681633 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-01-08 00:59:54.681639 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-01-08 00:59:54.681645 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-01-08 00:59:54.681651 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-01-08 00:59:54.681657 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-01-08 00:59:54.681663 | orchestrator | 2026-01-08 00:59:54.681669 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-08 00:59:54.681676 | orchestrator | Thursday 08 January 2026 00:58:09 +0000 (0:00:00.845) 0:00:21.728 ****** 2026-01-08 00:59:54.681682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-08 00:59:54.681688 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-08 00:59:54.681695 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2026-01-08 00:59:54.681701 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:59:54.681707 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-08 00:59:54.681713 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-08 00:59:54.681719 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-08 00:59:54.681725 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:59:54.681737 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-08 00:59:54.681743 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-08 00:59:54.681749 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-08 00:59:54.681755 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:59:54.681761 | orchestrator | 2026-01-08 00:59:54.681767 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-08 00:59:54.681773 | orchestrator | Thursday 08 January 2026 00:58:09 +0000 (0:00:00.381) 0:00:22.110 ****** 2026-01-08 00:59:54.681779 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 00:59:54.681785 | orchestrator | 2026-01-08 00:59:54.681791 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-08 00:59:54.681799 | orchestrator | Thursday 08 January 2026 00:58:10 +0000 (0:00:00.730) 0:00:22.841 ****** 2026-01-08 00:59:54.681812 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:59:54.681819 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:59:54.681826 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:59:54.681832 | orchestrator | 2026-01-08 00:59:54.681838 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2026-01-08 00:59:54.681844 | orchestrator | Thursday 08 January 2026 00:58:10 +0000 (0:00:00.326) 0:00:23.167 ****** 2026-01-08 00:59:54.681850 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:59:54.681856 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:59:54.681862 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:59:54.681868 | orchestrator | 2026-01-08 00:59:54.681874 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-08 00:59:54.681881 | orchestrator | Thursday 08 January 2026 00:58:10 +0000 (0:00:00.321) 0:00:23.489 ****** 2026-01-08 00:59:54.681886 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:59:54.681892 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:59:54.681898 | orchestrator | skipping: [testbed-node-5] 2026-01-08 00:59:54.681904 | orchestrator | 2026-01-08 00:59:54.681910 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-08 00:59:54.681915 | orchestrator | Thursday 08 January 2026 00:58:11 +0000 (0:00:00.326) 0:00:23.816 ****** 2026-01-08 00:59:54.681921 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:59:54.681927 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:59:54.681933 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:59:54.681939 | orchestrator | 2026-01-08 00:59:54.681945 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-08 00:59:54.681951 | orchestrator | Thursday 08 January 2026 00:58:12 +0000 (0:00:00.905) 0:00:24.721 ****** 2026-01-08 00:59:54.681957 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-08 00:59:54.681963 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-08 00:59:54.681968 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-08 00:59:54.681974 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:59:54.681980 | 
orchestrator | 2026-01-08 00:59:54.681991 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-08 00:59:54.681996 | orchestrator | Thursday 08 January 2026 00:58:12 +0000 (0:00:00.388) 0:00:25.110 ****** 2026-01-08 00:59:54.682003 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-08 00:59:54.682009 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-08 00:59:54.682054 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-08 00:59:54.682060 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:59:54.682066 | orchestrator | 2026-01-08 00:59:54.682073 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-08 00:59:54.682080 | orchestrator | Thursday 08 January 2026 00:58:12 +0000 (0:00:00.391) 0:00:25.502 ****** 2026-01-08 00:59:54.682087 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-08 00:59:54.682102 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-08 00:59:54.682108 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-08 00:59:54.682115 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:59:54.682122 | orchestrator | 2026-01-08 00:59:54.682129 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-08 00:59:54.682137 | orchestrator | Thursday 08 January 2026 00:58:13 +0000 (0:00:00.377) 0:00:25.880 ****** 2026-01-08 00:59:54.682144 | orchestrator | ok: [testbed-node-3] 2026-01-08 00:59:54.682149 | orchestrator | ok: [testbed-node-4] 2026-01-08 00:59:54.682155 | orchestrator | ok: [testbed-node-5] 2026-01-08 00:59:54.682176 | orchestrator | 2026-01-08 00:59:54.682183 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-08 00:59:54.682189 | orchestrator | Thursday 08 January 2026 00:58:13 
+0000 (0:00:00.323) 0:00:26.204 ****** 2026-01-08 00:59:54.682195 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-08 00:59:54.682201 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-08 00:59:54.682207 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-08 00:59:54.682212 | orchestrator | 2026-01-08 00:59:54.682218 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-08 00:59:54.682223 | orchestrator | Thursday 08 January 2026 00:58:14 +0000 (0:00:00.503) 0:00:26.707 ****** 2026-01-08 00:59:54.682229 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-08 00:59:54.682234 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-08 00:59:54.682240 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-08 00:59:54.682245 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-08 00:59:54.682251 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-08 00:59:54.682256 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-08 00:59:54.682261 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-08 00:59:54.682266 | orchestrator | 2026-01-08 00:59:54.682272 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-08 00:59:54.682277 | orchestrator | Thursday 08 January 2026 00:58:15 +0000 (0:00:01.069) 0:00:27.777 ****** 2026-01-08 00:59:54.682283 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-08 00:59:54.682288 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-08 00:59:54.682293 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-08 00:59:54.682299 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-08 00:59:54.682304 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-08 00:59:54.682310 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-08 00:59:54.682322 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-08 00:59:54.682327 | orchestrator | 2026-01-08 00:59:54.682333 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-01-08 00:59:54.682338 | orchestrator | Thursday 08 January 2026 00:58:17 +0000 (0:00:02.008) 0:00:29.785 ****** 2026-01-08 00:59:54.682344 | orchestrator | skipping: [testbed-node-3] 2026-01-08 00:59:54.682349 | orchestrator | skipping: [testbed-node-4] 2026-01-08 00:59:54.682355 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-01-08 00:59:54.682361 | orchestrator | 2026-01-08 00:59:54.682367 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-01-08 00:59:54.682372 | orchestrator | Thursday 08 January 2026 00:58:17 +0000 (0:00:00.400) 0:00:30.186 ****** 2026-01-08 00:59:54.682379 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-08 00:59:54.682391 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2026-01-08 00:59:54.682400 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-08 00:59:54.682407 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-08 00:59:54.682413 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-08 00:59:54.682419 | orchestrator | 2026-01-08 00:59:54.682426 | orchestrator | TASK [generate keys] *********************************************************** 2026-01-08 00:59:54.682432 | orchestrator | Thursday 08 January 2026 00:59:00 +0000 (0:00:43.135) 0:01:13.321 ****** 2026-01-08 00:59:54.682439 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:59:54.682445 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:59:54.682452 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:59:54.682459 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:59:54.682465 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:59:54.682471 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 
00:59:54.682477 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-01-08 00:59:54.682484 | orchestrator | 2026-01-08 00:59:54.682491 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-01-08 00:59:54.682498 | orchestrator | Thursday 08 January 2026 00:59:23 +0000 (0:00:23.105) 0:01:36.426 ****** 2026-01-08 00:59:54.682504 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:59:54.682512 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:59:54.682520 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:59:54.682529 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:59:54.682536 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:59:54.682544 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:59:54.682552 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-08 00:59:54.682558 | orchestrator | 2026-01-08 00:59:54.682565 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-01-08 00:59:54.682572 | orchestrator | Thursday 08 January 2026 00:59:36 +0000 (0:00:12.189) 0:01:48.616 ****** 2026-01-08 00:59:54.682579 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:59:54.682586 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-08 00:59:54.682599 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-08 00:59:54.682607 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:59:54.682615 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2026-01-08 00:59:54.682627 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-08 00:59:54.682634 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:59:54.682642 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-08 00:59:54.682648 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-08 00:59:54.682656 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:59:54.682663 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-08 00:59:54.682670 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-08 00:59:54.682678 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:59:54.682685 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-08 00:59:54.682692 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-08 00:59:54.682699 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-08 00:59:54.682706 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-08 00:59:54.682713 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-08 00:59:54.682721 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-01-08 00:59:54.682763 | orchestrator | 2026-01-08 00:59:54.682771 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 00:59:54.682786 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-08 00:59:54.682795 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-08 00:59:54.682803 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-08 00:59:54.682810 | orchestrator | 2026-01-08 00:59:54.682817 | orchestrator | 2026-01-08 00:59:54.682824 | orchestrator | 2026-01-08 00:59:54.682831 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 00:59:54.682837 | orchestrator | Thursday 08 January 2026 00:59:54 +0000 (0:00:18.099) 0:02:06.716 ****** 2026-01-08 00:59:54.682844 | orchestrator | =============================================================================== 2026-01-08 00:59:54.682851 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.14s 2026-01-08 00:59:54.682858 | orchestrator | generate keys ---------------------------------------------------------- 23.11s 2026-01-08 00:59:54.682863 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.10s 2026-01-08 00:59:54.682869 | orchestrator | get keys from monitors ------------------------------------------------- 12.19s 2026-01-08 00:59:54.682875 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.01s 2026-01-08 00:59:54.682881 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.00s 2026-01-08 00:59:54.682888 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.56s 2026-01-08 00:59:54.682894 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.07s 2026-01-08 00:59:54.682900 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.92s 2026-01-08 00:59:54.682906 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.91s 2026-01-08 
00:59:54.682917 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.85s 2026-01-08 00:59:54.682922 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.82s 2026-01-08 00:59:54.682928 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.74s 2026-01-08 00:59:54.682934 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.73s 2026-01-08 00:59:54.682940 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.73s 2026-01-08 00:59:54.682945 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.67s 2026-01-08 00:59:54.682951 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.67s 2026-01-08 00:59:54.682956 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.66s 2026-01-08 00:59:54.682962 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.64s 2026-01-08 00:59:54.682967 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.62s 2026-01-08 00:59:54.682973 | orchestrator | 2026-01-08 00:59:54 | INFO  | Task 7af06e14-dbc0-43fd-b3fb-cad4c541ba78 is in state STARTED 2026-01-08 00:59:54.682979 | orchestrator | 2026-01-08 00:59:54 | INFO  | Task 5ed2db58-7354-47ee-8c3c-ccdd177b60ec is in state STARTED 2026-01-08 00:59:54.682985 | orchestrator | 2026-01-08 00:59:54 | INFO  | Wait 1 second(s) until the next check 2026-01-08 00:59:57.721917 | orchestrator | 2026-01-08 00:59:57 | INFO  | Task 7af06e14-dbc0-43fd-b3fb-cad4c541ba78 is in state STARTED 2026-01-08 00:59:57.722001 | orchestrator | 2026-01-08 00:59:57 | INFO  | Task 5ed2db58-7354-47ee-8c3c-ccdd177b60ec is in state STARTED 2026-01-08 00:59:57.724427 | orchestrator | 2026-01-08 00:59:57 | INFO  | Task 
18ce2184-342e-4323-a66e-37a19c5cc46e is in state STARTED 2026-01-08 00:59:57.724988 | orchestrator | 2026-01-08 00:59:57 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:00:37.407015 | orchestrator | 2026-01-08 01:00:37 | INFO  | Task 7af06e14-dbc0-43fd-b3fb-cad4c541ba78 is in state STARTED 2026-01-08 01:00:37.410166 | orchestrator | 2026-01-08 01:00:37 | INFO  | Task 65e17fb3-a2f3-479d-a4dc-eb6e5b0d46ea is in state STARTED 2026-01-08 01:00:37.412826 | orchestrator | 2026-01-08 01:00:37 | INFO  | Task 5ed2db58-7354-47ee-8c3c-ccdd177b60ec is in state STARTED 2026-01-08 01:00:37.414576 | orchestrator | 2026-01-08 01:00:37 | INFO  | Task 18ce2184-342e-4323-a66e-37a19c5cc46e is in state SUCCESS 2026-01-08 01:00:37.414604 | orchestrator | 2026-01-08 01:00:37 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:01:01.816460 | orchestrator | 2026-01-08 01:01:01 | INFO  | Task 7af06e14-dbc0-43fd-b3fb-cad4c541ba78 is in state SUCCESS 2026-01-08 01:01:01.818606 | orchestrator | 2026-01-08 01:01:01.818668 | orchestrator | 2026-01-08 01:01:01.818678 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-01-08 01:01:01.818686 | orchestrator | 2026-01-08 01:01:01.818693 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-01-08 01:01:01.818700 | orchestrator | Thursday 08 January 2026 00:59:59 +0000 (0:00:00.161) 0:00:00.161 ****** 2026-01-08 01:01:01.818706 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-08 01:01:01.818711 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-08 01:01:01.818715 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-08 01:01:01.818719 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-08 01:01:01.818723 | 
orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-08 01:01:01.818727 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-08 01:01:01.818731 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-08 01:01:01.818734 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-01-08 01:01:01.818738 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-08 01:01:01.818742 | orchestrator | 2026-01-08 01:01:01.818746 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-01-08 01:01:01.818750 | orchestrator | Thursday 08 January 2026 01:00:03 +0000 (0:00:04.945) 0:00:05.107 ****** 2026-01-08 01:01:01.818754 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-08 01:01:01.818758 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-08 01:01:01.818762 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-08 01:01:01.818766 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-08 01:01:01.818770 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-08 01:01:01.818774 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-08 01:01:01.818791 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-08 01:01:01.818795 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.gnocchi.keyring) 2026-01-08 01:01:01.818799 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-08 01:01:01.818803 | orchestrator | 2026-01-08 01:01:01.818807 | orchestrator | TASK [Create share directory] ************************************************** 2026-01-08 01:01:01.818810 | orchestrator | Thursday 08 January 2026 01:00:07 +0000 (0:00:04.000) 0:00:09.107 ****** 2026-01-08 01:01:01.818815 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-08 01:01:01.818819 | orchestrator | 2026-01-08 01:01:01.818823 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-01-08 01:01:01.818826 | orchestrator | Thursday 08 January 2026 01:00:08 +0000 (0:00:00.996) 0:00:10.104 ****** 2026-01-08 01:01:01.818830 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-01-08 01:01:01.818834 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-08 01:01:01.818838 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-08 01:01:01.818842 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-01-08 01:01:01.818846 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-08 01:01:01.818850 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-01-08 01:01:01.818853 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-01-08 01:01:01.818857 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-01-08 01:01:01.818861 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-01-08 01:01:01.818865 | orchestrator | 2026-01-08 01:01:01.818869 | orchestrator | 
TASK [Check if target directories exist] *************************************** 2026-01-08 01:01:01.818872 | orchestrator | Thursday 08 January 2026 01:00:24 +0000 (0:00:15.091) 0:00:25.195 ****** 2026-01-08 01:01:01.818876 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-01-08 01:01:01.818938 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-01-08 01:01:01.818945 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-08 01:01:01.818949 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-08 01:01:01.818993 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-08 01:01:01.818999 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-08 01:01:01.819006 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-01-08 01:01:01.819012 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-01-08 01:01:01.819021 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-01-08 01:01:01.819175 | orchestrator | 2026-01-08 01:01:01.819180 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-01-08 01:01:01.819184 | orchestrator | Thursday 08 January 2026 01:00:28 +0000 (0:00:04.143) 0:00:29.339 ****** 2026-01-08 01:01:01.819189 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-01-08 01:01:01.819192 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-08 01:01:01.819196 | 
orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-08 01:01:01.819205 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-01-08 01:01:01.819209 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-08 01:01:01.819213 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-01-08 01:01:01.819217 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-01-08 01:01:01.819221 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-01-08 01:01:01.819225 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-01-08 01:01:01.819228 | orchestrator | 2026-01-08 01:01:01.819232 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 01:01:01.819236 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-08 01:01:01.819241 | orchestrator | 2026-01-08 01:01:01.819245 | orchestrator | 2026-01-08 01:01:01.819253 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 01:01:01.819263 | orchestrator | Thursday 08 January 2026 01:00:35 +0000 (0:00:07.358) 0:00:36.698 ****** 2026-01-08 01:01:01.819270 | orchestrator | =============================================================================== 2026-01-08 01:01:01.819276 | orchestrator | Write ceph keys to the share directory --------------------------------- 15.09s 2026-01-08 01:01:01.819283 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.36s 2026-01-08 01:01:01.819289 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.95s 2026-01-08 01:01:01.819296 | orchestrator | Check if target directories exist --------------------------------------- 4.14s 2026-01-08 
01:01:01.819303 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.00s 2026-01-08 01:01:01.819366 | orchestrator | Create share directory -------------------------------------------------- 1.00s 2026-01-08 01:01:01.819374 | orchestrator | 2026-01-08 01:01:01.819380 | orchestrator | 2026-01-08 01:01:01.819467 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-08 01:01:01.819472 | orchestrator | 2026-01-08 01:01:01.819476 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-08 01:01:01.819480 | orchestrator | Thursday 08 January 2026 00:59:22 +0000 (0:00:00.286) 0:00:00.286 ****** 2026-01-08 01:01:01.819483 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:01:01.819487 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:01:01.819491 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:01:01.819495 | orchestrator | 2026-01-08 01:01:01.819499 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-08 01:01:01.819503 | orchestrator | Thursday 08 January 2026 00:59:22 +0000 (0:00:00.300) 0:00:00.587 ****** 2026-01-08 01:01:01.819506 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-01-08 01:01:01.819510 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-01-08 01:01:01.819514 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-01-08 01:01:01.819518 | orchestrator | 2026-01-08 01:01:01.819522 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-01-08 01:01:01.819526 | orchestrator | 2026-01-08 01:01:01.819529 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-08 01:01:01.819533 | orchestrator | Thursday 08 January 2026 00:59:23 +0000 (0:00:00.432) 0:00:01.019 ****** 2026-01-08 01:01:01.819537 | orchestrator 
| included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 01:01:01.819541 | orchestrator | 2026-01-08 01:01:01.819545 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-01-08 01:01:01.819549 | orchestrator | Thursday 08 January 2026 00:59:23 +0000 (0:00:00.522) 0:00:01.542 ****** 2026-01-08 01:01:01.819567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-08 01:01:01.819579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-08 01:01:01.819593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-08 01:01:01.819598 | orchestrator | 2026-01-08 01:01:01.819602 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-01-08 01:01:01.819606 | orchestrator | Thursday 08 January 2026 00:59:24 +0000 (0:00:01.248) 0:00:02.791 ****** 2026-01-08 01:01:01.819610 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:01:01.819614 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:01:01.819618 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:01:01.819622 | orchestrator | 2026-01-08 01:01:01.819626 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-08 01:01:01.819630 | orchestrator | Thursday 08 January 2026 00:59:25 +0000 (0:00:00.464) 0:00:03.255 ****** 2026-01-08 01:01:01.819633 | orchestrator | skipping: [testbed-node-0] => 
(item={'name': 'cloudkitty', 'enabled': False})  2026-01-08 01:01:01.819637 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-08 01:01:01.819641 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-01-08 01:01:01.819645 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-01-08 01:01:01.819649 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-01-08 01:01:01.819653 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-01-08 01:01:01.819657 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-01-08 01:01:01.819660 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-01-08 01:01:01.819667 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-08 01:01:01.819671 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-08 01:01:01.819675 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-01-08 01:01:01.819679 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-01-08 01:01:01.819683 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-01-08 01:01:01.819686 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-01-08 01:01:01.819690 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-01-08 01:01:01.819694 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-01-08 01:01:01.819698 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-08 01:01:01.819702 | 
orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-08 01:01:01.819705 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-01-08 01:01:01.819711 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-01-08 01:01:01.819717 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-01-08 01:01:01.819721 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-01-08 01:01:01.819725 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-01-08 01:01:01.819729 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-01-08 01:01:01.819733 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-01-08 01:01:01.819738 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-01-08 01:01:01.819742 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-01-08 01:01:01.819746 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-01-08 01:01:01.819750 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-01-08 01:01:01.819754 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 
2026-01-08 01:01:01.819758 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-01-08 01:01:01.819762 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-01-08 01:01:01.819765 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-01-08 01:01:01.819770 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-01-08 01:01:01.819774 | orchestrator | 2026-01-08 01:01:01.819778 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-08 01:01:01.819782 | orchestrator | Thursday 08 January 2026 00:59:26 +0000 (0:00:00.793) 0:00:04.049 ****** 2026-01-08 01:01:01.819785 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:01:01.819791 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:01:01.819795 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:01:01.819799 | orchestrator | 2026-01-08 01:01:01.819803 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-08 01:01:01.819807 | orchestrator | Thursday 08 January 2026 00:59:26 +0000 (0:00:00.314) 0:00:04.363 ****** 2026-01-08 01:01:01.819811 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:01.819815 | orchestrator | 2026-01-08 01:01:01.819819 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-08 01:01:01.819822 | orchestrator | Thursday 08 January 2026 00:59:26 +0000 (0:00:00.133) 0:00:04.496 ****** 2026-01-08 01:01:01.819826 | orchestrator | skipping: [testbed-node-0] 2026-01-08 
01:01:01.819830 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:01:01.819834 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:01:01.819838 | orchestrator | 2026-01-08 01:01:01.819842 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-08 01:01:01.819845 | orchestrator | Thursday 08 January 2026 00:59:26 +0000 (0:00:00.499) 0:00:04.996 ****** 2026-01-08 01:01:01.819849 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:01:01.819853 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:01:01.819857 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:01:01.819861 | orchestrator | 2026-01-08 01:01:01.819865 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-08 01:01:01.819869 | orchestrator | Thursday 08 January 2026 00:59:27 +0000 (0:00:00.360) 0:00:05.357 ****** 2026-01-08 01:01:01.819872 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:01.819876 | orchestrator | 2026-01-08 01:01:01.819880 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-08 01:01:01.819884 | orchestrator | Thursday 08 January 2026 00:59:27 +0000 (0:00:00.122) 0:00:05.479 ****** 2026-01-08 01:01:01.819888 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:01.819892 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:01:01.819896 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:01:01.819900 | orchestrator | 2026-01-08 01:01:01.819903 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-08 01:01:01.819907 | orchestrator | Thursday 08 January 2026 00:59:27 +0000 (0:00:00.335) 0:00:05.814 ****** 2026-01-08 01:01:01.819911 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:01:01.819915 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:01:01.819919 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:01:01.819922 | orchestrator | 
2026-01-08 01:01:01.819926 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-08 01:01:01.819930 | orchestrator | Thursday 08 January 2026 00:59:28 +0000 (0:00:00.324) 0:00:06.139 ****** 2026-01-08 01:01:01.819934 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:01.819938 | orchestrator | 2026-01-08 01:01:01.819942 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-08 01:01:01.819948 | orchestrator | Thursday 08 January 2026 00:59:28 +0000 (0:00:00.340) 0:00:06.480 ****** 2026-01-08 01:01:01.819954 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:01.819957 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:01:01.819961 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:01:01.819965 | orchestrator | 2026-01-08 01:01:01.819969 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-08 01:01:01.819973 | orchestrator | Thursday 08 January 2026 00:59:28 +0000 (0:00:00.293) 0:00:06.774 ****** 2026-01-08 01:01:01.819977 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:01:01.819981 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:01:01.819984 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:01:01.819988 | orchestrator | 2026-01-08 01:01:01.819992 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-08 01:01:01.819996 | orchestrator | Thursday 08 January 2026 00:59:29 +0000 (0:00:00.322) 0:00:07.096 ****** 2026-01-08 01:01:01.820001 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:01.820005 | orchestrator | 2026-01-08 01:01:01.820010 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-08 01:01:01.820016 | orchestrator | Thursday 08 January 2026 00:59:29 +0000 (0:00:00.135) 0:00:07.232 ****** 2026-01-08 01:01:01.820021 | orchestrator | skipping: 
[testbed-node-0] 2026-01-08 01:01:01.820026 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:01:01.820030 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:01:01.820035 | orchestrator | 2026-01-08 01:01:01.820040 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-08 01:01:01.820046 | orchestrator | Thursday 08 January 2026 00:59:29 +0000 (0:00:00.296) 0:00:07.529 ****** 2026-01-08 01:01:01.820053 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:01:01.820063 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:01:01.820070 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:01:01.820077 | orchestrator | 2026-01-08 01:01:01.820083 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-08 01:01:01.820090 | orchestrator | Thursday 08 January 2026 00:59:30 +0000 (0:00:00.517) 0:00:08.046 ****** 2026-01-08 01:01:01.820096 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:01.820103 | orchestrator | 2026-01-08 01:01:01.820110 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-08 01:01:01.820117 | orchestrator | Thursday 08 January 2026 00:59:30 +0000 (0:00:00.134) 0:00:08.181 ****** 2026-01-08 01:01:01.820124 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:01.820130 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:01:01.820136 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:01:01.820142 | orchestrator | 2026-01-08 01:01:01.820149 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-08 01:01:01.820156 | orchestrator | Thursday 08 January 2026 00:59:30 +0000 (0:00:00.309) 0:00:08.490 ****** 2026-01-08 01:01:01.820163 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:01:01.820170 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:01:01.820177 | orchestrator | ok: [testbed-node-2] 2026-01-08 
01:01:01.820183 | orchestrator | 2026-01-08 01:01:01.820189 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-08 01:01:01.820196 | orchestrator | Thursday 08 January 2026 00:59:30 +0000 (0:00:00.353) 0:00:08.843 ****** 2026-01-08 01:01:01.820203 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:01.820210 | orchestrator | 2026-01-08 01:01:01.820217 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-08 01:01:01.820223 | orchestrator | Thursday 08 January 2026 00:59:30 +0000 (0:00:00.122) 0:00:08.966 ****** 2026-01-08 01:01:01.820230 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:01.820236 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:01:01.820242 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:01:01.820248 | orchestrator | 2026-01-08 01:01:01.820255 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-08 01:01:01.820260 | orchestrator | Thursday 08 January 2026 00:59:31 +0000 (0:00:00.295) 0:00:09.261 ****** 2026-01-08 01:01:01.820266 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:01:01.820272 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:01:01.820278 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:01:01.820284 | orchestrator | 2026-01-08 01:01:01.820290 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-08 01:01:01.820296 | orchestrator | Thursday 08 January 2026 00:59:31 +0000 (0:00:00.610) 0:00:09.871 ****** 2026-01-08 01:01:01.820303 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:01.820309 | orchestrator | 2026-01-08 01:01:01.820314 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-08 01:01:01.820320 | orchestrator | Thursday 08 January 2026 00:59:32 +0000 (0:00:00.150) 0:00:10.022 ****** 2026-01-08 01:01:01.820326 | 
orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:01.820333 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:01:01.820339 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:01:01.820345 | orchestrator | 2026-01-08 01:01:01.820351 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-08 01:01:01.820367 | orchestrator | Thursday 08 January 2026 00:59:32 +0000 (0:00:00.333) 0:00:10.355 ****** 2026-01-08 01:01:01.820373 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:01:01.820379 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:01:01.820423 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:01:01.820430 | orchestrator | 2026-01-08 01:01:01.820437 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-08 01:01:01.820444 | orchestrator | Thursday 08 January 2026 00:59:32 +0000 (0:00:00.336) 0:00:10.692 ****** 2026-01-08 01:01:01.820451 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:01.820459 | orchestrator | 2026-01-08 01:01:01.820465 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-08 01:01:01.820473 | orchestrator | Thursday 08 January 2026 00:59:32 +0000 (0:00:00.129) 0:00:10.822 ****** 2026-01-08 01:01:01.820479 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:01.820487 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:01:01.820494 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:01:01.820502 | orchestrator | 2026-01-08 01:01:01.820509 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-08 01:01:01.820516 | orchestrator | Thursday 08 January 2026 00:59:33 +0000 (0:00:00.501) 0:00:11.323 ****** 2026-01-08 01:01:01.820524 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:01:01.820531 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:01:01.820538 | orchestrator | ok: 
[testbed-node-2] 2026-01-08 01:01:01.820545 | orchestrator | 2026-01-08 01:01:01.820562 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-08 01:01:01.820569 | orchestrator | Thursday 08 January 2026 00:59:33 +0000 (0:00:00.310) 0:00:11.633 ****** 2026-01-08 01:01:01.820576 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:01.820583 | orchestrator | 2026-01-08 01:01:01.820590 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-08 01:01:01.820598 | orchestrator | Thursday 08 January 2026 00:59:33 +0000 (0:00:00.140) 0:00:11.774 ****** 2026-01-08 01:01:01.820605 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:01.820612 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:01:01.820619 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:01:01.820626 | orchestrator | 2026-01-08 01:01:01.820633 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-08 01:01:01.820640 | orchestrator | Thursday 08 January 2026 00:59:34 +0000 (0:00:00.278) 0:00:12.053 ****** 2026-01-08 01:01:01.820647 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:01:01.820654 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:01:01.820661 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:01:01.820668 | orchestrator | 2026-01-08 01:01:01.820675 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-08 01:01:01.820681 | orchestrator | Thursday 08 January 2026 00:59:34 +0000 (0:00:00.308) 0:00:12.361 ****** 2026-01-08 01:01:01.820688 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:01.820695 | orchestrator | 2026-01-08 01:01:01.820702 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-08 01:01:01.820709 | orchestrator | Thursday 08 January 2026 00:59:34 +0000 (0:00:00.132) 0:00:12.494 ****** 
2026-01-08 01:01:01.820716 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:01.820723 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:01:01.820730 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:01:01.820737 | orchestrator | 2026-01-08 01:01:01.820743 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-01-08 01:01:01.820749 | orchestrator | Thursday 08 January 2026 00:59:34 +0000 (0:00:00.489) 0:00:12.983 ****** 2026-01-08 01:01:01.820756 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:01:01.820763 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:01:01.820770 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:01:01.820776 | orchestrator | 2026-01-08 01:01:01.820783 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-01-08 01:01:01.820796 | orchestrator | Thursday 08 January 2026 00:59:36 +0000 (0:00:01.843) 0:00:14.827 ****** 2026-01-08 01:01:01.820803 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-08 01:01:01.820810 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-08 01:01:01.820817 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-01-08 01:01:01.820825 | orchestrator | 2026-01-08 01:01:01.820832 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-01-08 01:01:01.820840 | orchestrator | Thursday 08 January 2026 00:59:39 +0000 (0:00:02.278) 0:00:17.106 ****** 2026-01-08 01:01:01.820848 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-08 01:01:01.820855 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-08 01:01:01.820863 | orchestrator | 
changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-01-08 01:01:01.820870 | orchestrator | 2026-01-08 01:01:01.820878 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-01-08 01:01:01.820886 | orchestrator | Thursday 08 January 2026 00:59:41 +0000 (0:00:02.465) 0:00:19.572 ****** 2026-01-08 01:01:01.820893 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-08 01:01:01.820901 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-08 01:01:01.820908 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-01-08 01:01:01.820916 | orchestrator | 2026-01-08 01:01:01.820923 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-01-08 01:01:01.820929 | orchestrator | Thursday 08 January 2026 00:59:43 +0000 (0:00:02.027) 0:00:21.599 ****** 2026-01-08 01:01:01.820936 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:01.820943 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:01:01.820950 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:01:01.820957 | orchestrator | 2026-01-08 01:01:01.820964 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-01-08 01:01:01.820971 | orchestrator | Thursday 08 January 2026 00:59:43 +0000 (0:00:00.389) 0:00:21.988 ****** 2026-01-08 01:01:01.820978 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:01.820985 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:01:01.820992 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:01:01.820999 | orchestrator | 2026-01-08 01:01:01.821006 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-08 01:01:01.821013 
| orchestrator | Thursday 08 January 2026 00:59:44 +0000 (0:00:00.301) 0:00:22.289 ****** 2026-01-08 01:01:01.821020 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 01:01:01.821027 | orchestrator | 2026-01-08 01:01:01.821034 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-01-08 01:01:01.821040 | orchestrator | Thursday 08 January 2026 00:59:45 +0000 (0:00:00.770) 0:00:23.059 ****** 2026-01-08 01:01:01.821060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-08 01:01:01.821083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-08 01:01:01.821092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-08 01:01:01.821103 | orchestrator | 2026-01-08 01:01:01.821110 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-01-08 01:01:01.821115 | orchestrator | Thursday 08 January 2026 00:59:46 +0000 (0:00:01.647) 0:00:24.707 ****** 2026-01-08 01:01:01.821127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-08 01:01:01.821138 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:01.821145 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-08 01:01:01.821152 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:01:01.821191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-08 01:01:01.821204 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:01:01.821211 | orchestrator | 2026-01-08 01:01:01.821218 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-01-08 01:01:01.821225 | orchestrator | Thursday 08 January 2026 00:59:47 +0000 (0:00:00.728) 0:00:25.436 ****** 2026-01-08 01:01:01.821233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-08 01:01:01.821240 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:01.821255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-08 01:01:01.821267 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:01:01.821275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-08 01:01:01.821282 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:01:01.821289 | orchestrator | 2026-01-08 01:01:01.821296 | orchestrator | TASK [service-check-containers : horizon | Check 
containers] ******************* 2026-01-08 01:01:01.821308 | orchestrator | Thursday 08 January 2026 00:59:48 +0000 (0:00:00.841) 0:00:26.277 ****** 2026-01-08 01:01:01.821324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 
'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-08 01:01:01.821343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-08 01:01:01.821355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-08 01:01:01.821362 | orchestrator | 2026-01-08 01:01:01.821369 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] *** 2026-01-08 01:01:01.821376 | orchestrator | Thursday 08 January 2026 00:59:50 +0000 (0:00:01.849) 0:00:28.127 ****** 2026-01-08 01:01:01.821396 | orchestrator | changed: [testbed-node-0] => { 2026-01-08 01:01:01.821404 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:01:01.821411 | orchestrator | } 2026-01-08 01:01:01.821418 | orchestrator | changed: [testbed-node-1] => { 2026-01-08 01:01:01.821426 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:01:01.821434 | orchestrator | } 2026-01-08 01:01:01.821440 | orchestrator | changed: [testbed-node-2] => { 2026-01-08 01:01:01.821447 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:01:01.821454 | orchestrator | } 2026-01-08 01:01:01.821461 | orchestrator | 2026-01-08 01:01:01.821468 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-08 01:01:01.821475 | orchestrator | Thursday 08 
January 2026 00:59:50 +0000 (0:00:00.392) 0:00:28.520 ****** 2026-01-08 01:01:01.821491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 
'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-08 01:01:01.821502 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:01.821506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-08 01:01:01.821513 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:01:01.821523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2025.1', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option 
httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-08 01:01:01.821527 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:01:01.821531 | orchestrator | 2026-01-08 01:01:01.821535 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-08 01:01:01.821539 | orchestrator | Thursday 08 January 2026 00:59:51 +0000 (0:00:00.877) 0:00:29.398 ****** 2026-01-08 01:01:01.821543 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:01.821547 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:01:01.821550 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:01:01.821554 | orchestrator | 2026-01-08 01:01:01.821558 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-08 01:01:01.821562 | orchestrator | Thursday 08 January 2026 00:59:51 +0000 (0:00:00.521) 0:00:29.919 ****** 2026-01-08 01:01:01.821610 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 01:01:01.821615 | 
orchestrator | 2026-01-08 01:01:01.821619 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-01-08 01:01:01.821623 | orchestrator | Thursday 08 January 2026 00:59:52 +0000 (0:00:00.571) 0:00:30.490 ****** 2026-01-08 01:01:01.821627 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:01:01.821631 | orchestrator | 2026-01-08 01:01:01.821635 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-01-08 01:01:01.821639 | orchestrator | Thursday 08 January 2026 00:59:55 +0000 (0:00:02.519) 0:00:33.010 ****** 2026-01-08 01:01:01.821643 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:01:01.821647 | orchestrator | 2026-01-08 01:01:01.821651 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-01-08 01:01:01.821658 | orchestrator | Thursday 08 January 2026 00:59:57 +0000 (0:00:02.298) 0:00:35.308 ****** 2026-01-08 01:01:01.821662 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:01:01.821665 | orchestrator | 2026-01-08 01:01:01.821669 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-08 01:01:01.821674 | orchestrator | Thursday 08 January 2026 01:00:12 +0000 (0:00:15.565) 0:00:50.873 ****** 2026-01-08 01:01:01.821677 | orchestrator | 2026-01-08 01:01:01.821681 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-08 01:01:01.821685 | orchestrator | Thursday 08 January 2026 01:00:12 +0000 (0:00:00.066) 0:00:50.940 ****** 2026-01-08 01:01:01.821689 | orchestrator | 2026-01-08 01:01:01.821693 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-01-08 01:01:01.821697 | orchestrator | Thursday 08 January 2026 01:00:13 +0000 (0:00:00.251) 0:00:51.192 ****** 2026-01-08 01:01:01.821701 | orchestrator | 2026-01-08 01:01:01.821704 | orchestrator | RUNNING 
HANDLER [horizon : Restart horizon container] ************************** 2026-01-08 01:01:01.821708 | orchestrator | Thursday 08 January 2026 01:00:13 +0000 (0:00:00.067) 0:00:51.259 ****** 2026-01-08 01:01:01.821714 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:01:01.821721 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:01:01.821727 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:01:01.821735 | orchestrator | 2026-01-08 01:01:01.821741 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 01:01:01.821748 | orchestrator | testbed-node-0 : ok=38  changed=12  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0 2026-01-08 01:01:01.821756 | orchestrator | testbed-node-1 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-01-08 01:01:01.821770 | orchestrator | testbed-node-2 : ok=35  changed=9  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-01-08 01:01:01.821778 | orchestrator | 2026-01-08 01:01:01.821785 | orchestrator | 2026-01-08 01:01:01.821792 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 01:01:01.821799 | orchestrator | Thursday 08 January 2026 01:01:00 +0000 (0:00:47.267) 0:01:38.526 ****** 2026-01-08 01:01:01.821806 | orchestrator | =============================================================================== 2026-01-08 01:01:01.821813 | orchestrator | horizon : Restart horizon container ------------------------------------ 47.27s 2026-01-08 01:01:01.821819 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.57s 2026-01-08 01:01:01.821825 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.52s 2026-01-08 01:01:01.821831 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.47s 2026-01-08 01:01:01.821838 | orchestrator | horizon : Creating Horizon 
database user and setting permissions -------- 2.30s 2026-01-08 01:01:01.821844 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.28s 2026-01-08 01:01:01.821851 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.03s 2026-01-08 01:01:01.821857 | orchestrator | service-check-containers : horizon | Check containers ------------------- 1.85s 2026-01-08 01:01:01.821865 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.85s 2026-01-08 01:01:01.821871 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.65s 2026-01-08 01:01:01.821878 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.25s 2026-01-08 01:01:01.821885 | orchestrator | service-check-containers : Include tasks -------------------------------- 0.88s 2026-01-08 01:01:01.821892 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.84s 2026-01-08 01:01:01.821899 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.79s 2026-01-08 01:01:01.821910 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.77s 2026-01-08 01:01:01.821917 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.73s 2026-01-08 01:01:01.821924 | orchestrator | horizon : Update policy file name --------------------------------------- 0.61s 2026-01-08 01:01:01.821931 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.57s 2026-01-08 01:01:01.821944 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s 2026-01-08 01:01:01.821951 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s 2026-01-08 01:01:01.821958 | orchestrator | 2026-01-08 01:01:01 | INFO  | Task 
65e17fb3-a2f3-479d-a4dc-eb6e5b0d46ea is in state STARTED 2026-01-08 01:01:01.828512 | orchestrator | 2026-01-08 01:01:01 | INFO  | Task 5ed2db58-7354-47ee-8c3c-ccdd177b60ec is in state STARTED 2026-01-08 01:01:01.828621 | orchestrator | 2026-01-08 01:01:01 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:01:04.874093 | orchestrator | 2026-01-08 01:01:04 | INFO  | Task 65e17fb3-a2f3-479d-a4dc-eb6e5b0d46ea is in state STARTED 2026-01-08 01:01:04.876273 | orchestrator | 2026-01-08 01:01:04 | INFO  | Task 5ed2db58-7354-47ee-8c3c-ccdd177b60ec is in state STARTED 2026-01-08 01:01:04.876328 | orchestrator | 2026-01-08 01:01:04 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:01:07.933380 | orchestrator | 2026-01-08 01:01:07 | INFO  | Task 65e17fb3-a2f3-479d-a4dc-eb6e5b0d46ea is in state STARTED 2026-01-08 01:01:07.935238 | orchestrator | 2026-01-08 01:01:07 | INFO  | Task 5ed2db58-7354-47ee-8c3c-ccdd177b60ec is in state STARTED 2026-01-08 01:01:07.935271 | orchestrator | 2026-01-08 01:01:07 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:01:10.980025 | orchestrator | 2026-01-08 01:01:10 | INFO  | Task 65e17fb3-a2f3-479d-a4dc-eb6e5b0d46ea is in state STARTED 2026-01-08 01:01:10.981973 | orchestrator | 2026-01-08 01:01:10 | INFO  | Task 5ed2db58-7354-47ee-8c3c-ccdd177b60ec is in state STARTED 2026-01-08 01:01:10.982122 | orchestrator | 2026-01-08 01:01:10 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:01:14.023491 | orchestrator | 2026-01-08 01:01:14 | INFO  | Task 65e17fb3-a2f3-479d-a4dc-eb6e5b0d46ea is in state STARTED 2026-01-08 01:01:14.024934 | orchestrator | 2026-01-08 01:01:14 | INFO  | Task 5ed2db58-7354-47ee-8c3c-ccdd177b60ec is in state STARTED 2026-01-08 01:01:14.024976 | orchestrator | 2026-01-08 01:01:14 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:01:17.072880 | orchestrator | 2026-01-08 01:01:17 | INFO  | Task 65e17fb3-a2f3-479d-a4dc-eb6e5b0d46ea is in state STARTED 2026-01-08 
01:01:17.075290 | orchestrator | 2026-01-08 01:01:17 | INFO  | Task 5ed2db58-7354-47ee-8c3c-ccdd177b60ec is in state STARTED 2026-01-08 01:01:17.075330 | orchestrator | 2026-01-08 01:01:17 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:01:20.123250 | orchestrator | 2026-01-08 01:01:20 | INFO  | Task 65e17fb3-a2f3-479d-a4dc-eb6e5b0d46ea is in state STARTED 2026-01-08 01:01:20.124411 | orchestrator | 2026-01-08 01:01:20 | INFO  | Task 5ed2db58-7354-47ee-8c3c-ccdd177b60ec is in state STARTED 2026-01-08 01:01:20.124517 | orchestrator | 2026-01-08 01:01:20 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:01:23.178741 | orchestrator | 2026-01-08 01:01:23 | INFO  | Task 65e17fb3-a2f3-479d-a4dc-eb6e5b0d46ea is in state STARTED 2026-01-08 01:01:23.180117 | orchestrator | 2026-01-08 01:01:23 | INFO  | Task 5ed2db58-7354-47ee-8c3c-ccdd177b60ec is in state STARTED 2026-01-08 01:01:23.180167 | orchestrator | 2026-01-08 01:01:23 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:01:26.229336 | orchestrator | 2026-01-08 01:01:26 | INFO  | Task 65e17fb3-a2f3-479d-a4dc-eb6e5b0d46ea is in state STARTED 2026-01-08 01:01:26.231363 | orchestrator | 2026-01-08 01:01:26 | INFO  | Task 5ed2db58-7354-47ee-8c3c-ccdd177b60ec is in state STARTED 2026-01-08 01:01:26.231432 | orchestrator | 2026-01-08 01:01:26 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:01:29.276581 | orchestrator | 2026-01-08 01:01:29 | INFO  | Task 65e17fb3-a2f3-479d-a4dc-eb6e5b0d46ea is in state STARTED 2026-01-08 01:01:29.278416 | orchestrator | 2026-01-08 01:01:29 | INFO  | Task 5ed2db58-7354-47ee-8c3c-ccdd177b60ec is in state STARTED 2026-01-08 01:01:29.278488 | orchestrator | 2026-01-08 01:01:29 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:01:32.327923 | orchestrator | 2026-01-08 01:01:32 | INFO  | Task 65e17fb3-a2f3-479d-a4dc-eb6e5b0d46ea is in state STARTED 2026-01-08 01:01:32.329862 | orchestrator | 2026-01-08 01:01:32 | INFO  | Task 
5ed2db58-7354-47ee-8c3c-ccdd177b60ec is in state STARTED 2026-01-08 01:01:32.330078 | orchestrator | 2026-01-08 01:01:32 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:01:35.369249 | orchestrator | 2026-01-08 01:01:35 | INFO  | Task 65e17fb3-a2f3-479d-a4dc-eb6e5b0d46ea is in state STARTED 2026-01-08 01:01:35.370164 | orchestrator | 2026-01-08 01:01:35 | INFO  | Task 5ed2db58-7354-47ee-8c3c-ccdd177b60ec is in state STARTED 2026-01-08 01:01:35.370192 | orchestrator | 2026-01-08 01:01:35 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:01:38.420776 | orchestrator | 2026-01-08 01:01:38 | INFO  | Task 65e17fb3-a2f3-479d-a4dc-eb6e5b0d46ea is in state SUCCESS 2026-01-08 01:01:38.421197 | orchestrator | 2026-01-08 01:01:38 | INFO  | Task 5ed2db58-7354-47ee-8c3c-ccdd177b60ec is in state STARTED 2026-01-08 01:01:38.422737 | orchestrator | 2026-01-08 01:01:38 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:01:38.424075 | orchestrator | 2026-01-08 01:01:38 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:01:38.425190 | orchestrator | 2026-01-08 01:01:38 | INFO  | Task 3498c70b-7696-4356-b011-9bda669c0b16 is in state STARTED 2026-01-08 01:01:38.425221 | orchestrator | 2026-01-08 01:01:38 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:01:41.458122 | orchestrator | 2026-01-08 01:01:41 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:01:41.458473 | orchestrator | 2026-01-08 01:01:41 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:01:41.460698 | orchestrator | 2026-01-08 01:01:41 | INFO  | Task 5ed2db58-7354-47ee-8c3c-ccdd177b60ec is in state SUCCESS 2026-01-08 01:01:41.462718 | orchestrator | 2026-01-08 01:01:41.462776 | orchestrator | 2026-01-08 01:01:41.462786 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-01-08 01:01:41.462794 | 
orchestrator | 2026-01-08 01:01:41.462801 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-01-08 01:01:41.462808 | orchestrator | Thursday 08 January 2026 01:00:40 +0000 (0:00:00.232) 0:00:00.232 ****** 2026-01-08 01:01:41.462815 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-01-08 01:01:41.462824 | orchestrator | 2026-01-08 01:01:41.462832 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-01-08 01:01:41.462839 | orchestrator | Thursday 08 January 2026 01:00:40 +0000 (0:00:00.257) 0:00:00.489 ****** 2026-01-08 01:01:41.462847 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-01-08 01:01:41.462853 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-01-08 01:01:41.462861 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-01-08 01:01:41.462893 | orchestrator | 2026-01-08 01:01:41.462901 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-01-08 01:01:41.462908 | orchestrator | Thursday 08 January 2026 01:00:41 +0000 (0:00:01.345) 0:00:01.835 ****** 2026-01-08 01:01:41.462916 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-01-08 01:01:41.462922 | orchestrator | 2026-01-08 01:01:41.462930 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-01-08 01:01:41.463000 | orchestrator | Thursday 08 January 2026 01:00:43 +0000 (0:00:01.443) 0:00:03.278 ****** 2026-01-08 01:01:41.463007 | orchestrator | changed: [testbed-manager] 2026-01-08 01:01:41.463011 | orchestrator | 2026-01-08 01:01:41.463015 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 
2026-01-08 01:01:41.463019 | orchestrator | Thursday 08 January 2026 01:00:44 +0000 (0:00:00.919) 0:00:04.198 ****** 2026-01-08 01:01:41.463023 | orchestrator | changed: [testbed-manager] 2026-01-08 01:01:41.463026 | orchestrator | 2026-01-08 01:01:41.463030 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-01-08 01:01:41.463034 | orchestrator | Thursday 08 January 2026 01:00:45 +0000 (0:00:00.946) 0:00:05.145 ****** 2026-01-08 01:01:41.463038 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-01-08 01:01:41.463213 | orchestrator | ok: [testbed-manager] 2026-01-08 01:01:41.463220 | orchestrator | 2026-01-08 01:01:41.463224 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-01-08 01:01:41.463228 | orchestrator | Thursday 08 January 2026 01:01:25 +0000 (0:00:40.443) 0:00:45.588 ****** 2026-01-08 01:01:41.463232 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-01-08 01:01:41.463236 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-01-08 01:01:41.463240 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-01-08 01:01:41.463244 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-01-08 01:01:41.463248 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-01-08 01:01:41.463252 | orchestrator | 2026-01-08 01:01:41.463255 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-01-08 01:01:41.463259 | orchestrator | Thursday 08 January 2026 01:01:29 +0000 (0:00:04.200) 0:00:49.789 ****** 2026-01-08 01:01:41.463263 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-01-08 01:01:41.463267 | orchestrator | 2026-01-08 01:01:41.463271 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-01-08 01:01:41.463274 | orchestrator | Thursday 08 
January 2026 01:01:30 +0000 (0:00:00.455) 0:00:50.245 ****** 2026-01-08 01:01:41.463278 | orchestrator | skipping: [testbed-manager] 2026-01-08 01:01:41.463282 | orchestrator | 2026-01-08 01:01:41.463286 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-01-08 01:01:41.463290 | orchestrator | Thursday 08 January 2026 01:01:30 +0000 (0:00:00.145) 0:00:50.391 ****** 2026-01-08 01:01:41.463294 | orchestrator | skipping: [testbed-manager] 2026-01-08 01:01:41.463298 | orchestrator | 2026-01-08 01:01:41.463302 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-01-08 01:01:41.463306 | orchestrator | Thursday 08 January 2026 01:01:31 +0000 (0:00:00.505) 0:00:50.897 ****** 2026-01-08 01:01:41.463309 | orchestrator | changed: [testbed-manager] 2026-01-08 01:01:41.463313 | orchestrator | 2026-01-08 01:01:41.463317 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-01-08 01:01:41.463321 | orchestrator | Thursday 08 January 2026 01:01:32 +0000 (0:00:01.449) 0:00:52.347 ****** 2026-01-08 01:01:41.463325 | orchestrator | changed: [testbed-manager] 2026-01-08 01:01:41.463328 | orchestrator | 2026-01-08 01:01:41.463332 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-01-08 01:01:41.463336 | orchestrator | Thursday 08 January 2026 01:01:33 +0000 (0:00:00.726) 0:00:53.073 ****** 2026-01-08 01:01:41.463340 | orchestrator | changed: [testbed-manager] 2026-01-08 01:01:41.463344 | orchestrator | 2026-01-08 01:01:41.463355 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-01-08 01:01:41.463359 | orchestrator | Thursday 08 January 2026 01:01:33 +0000 (0:00:00.594) 0:00:53.667 ****** 2026-01-08 01:01:41.463363 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-01-08 01:01:41.463367 | orchestrator | ok: [testbed-manager] 
=> (item=rados) 2026-01-08 01:01:41.463394 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-01-08 01:01:41.463398 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-01-08 01:01:41.463402 | orchestrator | 2026-01-08 01:01:41.463406 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 01:01:41.463410 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-08 01:01:41.463415 | orchestrator | 2026-01-08 01:01:41.463419 | orchestrator | 2026-01-08 01:01:41.463433 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 01:01:41.463437 | orchestrator | Thursday 08 January 2026 01:01:35 +0000 (0:00:01.599) 0:00:55.267 ****** 2026-01-08 01:01:41.463441 | orchestrator | =============================================================================== 2026-01-08 01:01:41.463445 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.44s 2026-01-08 01:01:41.463449 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.20s 2026-01-08 01:01:41.463452 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.60s 2026-01-08 01:01:41.463456 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.45s 2026-01-08 01:01:41.463460 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.44s 2026-01-08 01:01:41.463464 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.35s 2026-01-08 01:01:41.463468 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.95s 2026-01-08 01:01:41.463471 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.92s 2026-01-08 01:01:41.463475 | orchestrator | 
osism.services.cephclient : Ensure that all containers are up ----------- 0.73s 2026-01-08 01:01:41.463479 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.59s 2026-01-08 01:01:41.463483 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.51s 2026-01-08 01:01:41.463486 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.46s 2026-01-08 01:01:41.463525 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.26s 2026-01-08 01:01:41.463531 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2026-01-08 01:01:41.463534 | orchestrator | 2026-01-08 01:01:41.463538 | orchestrator | 2026-01-08 01:01:41.463542 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-08 01:01:41.463546 | orchestrator | 2026-01-08 01:01:41.463550 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-08 01:01:41.463553 | orchestrator | Thursday 08 January 2026 00:59:22 +0000 (0:00:00.263) 0:00:00.263 ****** 2026-01-08 01:01:41.463557 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:01:41.463561 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:01:41.463565 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:01:41.463569 | orchestrator | 2026-01-08 01:01:41.463673 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-08 01:01:41.463679 | orchestrator | Thursday 08 January 2026 00:59:22 +0000 (0:00:00.287) 0:00:00.550 ****** 2026-01-08 01:01:41.463683 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-08 01:01:41.463687 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-08 01:01:41.463691 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-08 01:01:41.463694 | 
orchestrator | 2026-01-08 01:01:41.463698 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-01-08 01:01:41.463702 | orchestrator | 2026-01-08 01:01:41.463706 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-08 01:01:41.463716 | orchestrator | Thursday 08 January 2026 00:59:22 +0000 (0:00:00.450) 0:00:01.001 ****** 2026-01-08 01:01:41.463720 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 01:01:41.463725 | orchestrator | 2026-01-08 01:01:41.463728 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-01-08 01:01:41.463732 | orchestrator | Thursday 08 January 2026 00:59:23 +0000 (0:00:00.617) 0:00:01.618 ****** 2026-01-08 01:01:41.463741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-08 01:01:41.463766 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-08 01:01:41.463776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-08 01:01:41.463781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-08 01:01:41.463791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-08 01:01:41.463795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-08 01:01:41.463799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-08 01:01:41.463817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-08 01:01:41.463821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-08 01:01:41.463825 | orchestrator | 2026-01-08 01:01:41.463832 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-01-08 01:01:41.463836 | orchestrator | Thursday 08 January 2026 00:59:25 +0000 (0:00:01.830) 0:00:03.449 ****** 2026-01-08 01:01:41.463840 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:41.463844 | orchestrator | 2026-01-08 01:01:41.463849 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-01-08 01:01:41.463852 | orchestrator | Thursday 08 January 2026 00:59:25 +0000 (0:00:00.151) 0:00:03.600 ****** 2026-01-08 01:01:41.463860 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:41.463864 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:01:41.463869 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:01:41.463875 | orchestrator | 2026-01-08 01:01:41.463881 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-01-08 01:01:41.463887 | orchestrator | Thursday 08 January 2026 00:59:26 +0000 (0:00:00.502) 0:00:04.103 ****** 2026-01-08 01:01:41.463893 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-08 01:01:41.463899 | orchestrator | 2026-01-08 01:01:41.463905 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-08 01:01:41.463911 | orchestrator | Thursday 08 January 2026 00:59:26 +0000 (0:00:00.840) 0:00:04.944 ****** 2026-01-08 01:01:41.463917 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 01:01:41.463924 | orchestrator | 2026-01-08 01:01:41.463930 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-01-08 01:01:41.463936 | orchestrator | Thursday 08 January 2026 00:59:27 +0000 (0:00:00.560) 0:00:05.504 ****** 
2026-01-08 01:01:41.463942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-08 01:01:41.463972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-08 01:01:41.463985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-08 01:01:41.463998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-08 01:01:41.464005 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-08 01:01:41.464010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-08 01:01:41.464017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-08 01:01:41.464041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-08 01:01:41.464048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-08 01:01:41.464060 | orchestrator | 2026-01-08 01:01:41.464066 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-01-08 01:01:41.464072 | orchestrator | Thursday 08 January 2026 00:59:31 +0000 (0:00:03.685) 0:00:09.190 ****** 2026-01-08 01:01:41.464084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-08 01:01:41.464090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-08 01:01:41.464096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-08 01:01:41.464102 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:41.464114 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-08 01:01:41.464121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-08 01:01:41.464141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-08 01:01:41.464148 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:01:41.464155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-08 01:01:41.464163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-08 01:01:41.464170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-08 01:01:41.464177 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:01:41.464183 | orchestrator | 2026-01-08 01:01:41.464196 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-01-08 01:01:41.464203 | orchestrator | Thursday 08 January 2026 00:59:31 +0000 (0:00:00.623) 0:00:09.813 ****** 2026-01-08 01:01:41.464210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-08 01:01:41.464227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-08 01:01:41.464234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-08 01:01:41.464240 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:41.464247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-08 01:01:41.464258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-08 01:01:41.464265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-08 01:01:41.464276 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:01:41.464287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-08 01:01:41.464295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-08 01:01:41.464301 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-08 01:01:41.464308 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:01:41.464316 | orchestrator | 2026-01-08 01:01:41.464322 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-01-08 01:01:41.464330 | orchestrator | Thursday 08 January 2026 00:59:32 +0000 (0:00:00.831) 0:00:10.645 ****** 2026-01-08 01:01:41.464343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin', 'option httpchk']}}}}) 2026-01-08 01:01:41.464359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-08 01:01:41.464366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 
'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-08 01:01:41.464374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-08 01:01:41.464381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-08 01:01:41.464388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-08 01:01:41.464404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-08 01:01:41.464415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-08 01:01:41.464423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-08 01:01:41.464429 | orchestrator | 2026-01-08 01:01:41.464435 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-01-08 01:01:41.464442 | orchestrator | Thursday 08 January 2026 00:59:36 +0000 (0:00:03.569) 0:00:14.214 ****** 2026-01-08 01:01:41.464449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-08 01:01:41.464456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-08 01:01:41.464474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-08 01:01:41.464486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-08 01:01:41.464522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-08 01:01:41.464530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-08 01:01:41.464537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-08 01:01:41.464553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-08 01:01:41.464561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-08 01:01:41.464568 | orchestrator |
2026-01-08 01:01:41.464575 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2026-01-08 01:01:41.464581 | orchestrator | Thursday 08 January 2026 00:59:42 +0000 (0:00:05.834) 0:00:20.049 ******
2026-01-08 01:01:41.464588 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:01:41.464596 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:01:41.464602 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:01:41.464609 | orchestrator |
2026-01-08 01:01:41.464615 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-01-08 01:01:41.464622 | orchestrator | Thursday 08 January 2026 00:59:43 +0000 (0:00:01.396) 0:00:21.445 ******
2026-01-08 01:01:41.464629 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:01:41.464636 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:01:41.464646 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:01:41.464655 | orchestrator |
2026-01-08 01:01:41.464660 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-01-08 01:01:41.464665 | orchestrator | Thursday 08 January 2026 00:59:44 +0000 (0:00:00.599) 0:00:22.045 ******
2026-01-08 01:01:41.464669 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:01:41.464674 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:01:41.464678 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:01:41.464683 | orchestrator |
2026-01-08 01:01:41.464687 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-01-08 01:01:41.464692 | orchestrator | Thursday 08 January 2026 00:59:44 +0000 (0:00:00.280) 0:00:22.326 ******
2026-01-08 01:01:41.464697 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:01:41.464702 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:01:41.464707 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:01:41.464711 | orchestrator |
2026-01-08 01:01:41.464716 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-01-08 01:01:41.464721 | orchestrator | Thursday 08 January 2026 00:59:44 +0000 (0:00:00.499) 0:00:22.826 ******
2026-01-08 01:01:41.464727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-08 01:01:41.464736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-08 01:01:41.464746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-08 01:01:41.464751 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:01:41.464758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-08 01:01:41.464764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-08 01:01:41.464769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-08 01:01:41.464894 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:01:41.464906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-08 01:01:41.464919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-08 01:01:41.464926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-08 01:01:41.464932 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:01:41.464938 | orchestrator |
2026-01-08 01:01:41.464945 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-08 01:01:41.464953 | orchestrator | Thursday 08 January 2026 00:59:45 +0000 (0:00:00.683) 0:00:23.509 ******
2026-01-08 01:01:41.464964 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:01:41.464971 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:01:41.464979 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:01:41.464985 | orchestrator |
2026-01-08 01:01:41.464992 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2026-01-08 01:01:41.464998 | orchestrator | Thursday 08 January 2026 00:59:45 +0000 (0:00:00.288) 0:00:23.798 ******
2026-01-08 01:01:41.465011 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-01-08 01:01:41.465019 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-01-08 01:01:41.465026 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-01-08 01:01:41.465032 | orchestrator |
2026-01-08 01:01:41.465039 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-01-08 01:01:41.465045 | orchestrator | Thursday 08 January 2026 00:59:47 +0000 (0:00:01.852) 0:00:25.650 ******
2026-01-08 01:01:41.465051 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-08 01:01:41.465058 | orchestrator |
2026-01-08 01:01:41.465071 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-01-08 01:01:41.465078 | orchestrator | Thursday 08 January 2026 00:59:48 +0000 (0:00:00.951) 0:00:26.601 ******
2026-01-08 01:01:41.465086 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:01:41.465092 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:01:41.465100 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:01:41.465107 | orchestrator |
2026-01-08 01:01:41.465115 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-01-08 01:01:41.465121 | orchestrator | Thursday 08 January 2026 00:59:49 +0000 (0:00:01.008) 0:00:27.610 ******
2026-01-08 01:01:41.465128 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-08 01:01:41.465136 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-08 01:01:41.465141 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-08 01:01:41.465145 | orchestrator |
2026-01-08 01:01:41.465150 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-01-08 01:01:41.465155 | orchestrator | Thursday 08 January 2026 00:59:50 +0000 (0:00:01.168) 0:00:28.779 ******
2026-01-08 01:01:41.465159 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:01:41.465165 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:01:41.465169 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:01:41.465174 | orchestrator |
2026-01-08 01:01:41.465179 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-01-08 01:01:41.465183 | orchestrator | Thursday 08 January 2026 00:59:51 +0000 (0:00:00.364) 0:00:29.143 ******
2026-01-08 01:01:41.465188 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-01-08 01:01:41.465193 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-01-08 01:01:41.465198 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-01-08 01:01:41.465202 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-01-08 01:01:41.465207 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-01-08 01:01:41.465212 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-01-08 01:01:41.465217 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-01-08 01:01:41.465223 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-01-08 01:01:41.465228 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-01-08 01:01:41.465232 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-01-08 01:01:41.465237 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-01-08 01:01:41.465242 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-01-08 01:01:41.465247 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-01-08 01:01:41.465258 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-01-08 01:01:41.465264 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-01-08 01:01:41.465270 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-08 01:01:41.465276 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-08 01:01:41.465282 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-08 01:01:41.465289 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-08 01:01:41.465295 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-08 01:01:41.465309 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-08 01:01:41.465315 | orchestrator |
2026-01-08 01:01:41.465320 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-01-08 01:01:41.465325 | orchestrator | Thursday 08 January 2026 01:00:00 +0000 (0:00:09.751) 0:00:38.895 ******
2026-01-08 01:01:41.465330 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-08 01:01:41.465334 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-08 01:01:41.465339 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-08 01:01:41.465347 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-08 01:01:41.465352 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-08 01:01:41.465356 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-08 01:01:41.465361 | orchestrator |
2026-01-08 01:01:41.465366 | orchestrator | TASK [service-check-containers : keystone | Check containers] ******************
2026-01-08 01:01:41.465370 | orchestrator | Thursday 08 January 2026 01:00:03 +0000 (0:00:03.011) 0:00:41.906 ******
2026-01-08 01:01:41.465377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-08 01:01:41.465383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-08 01:01:41.465395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-08 01:01:41.465407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-08 01:01:41.465413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-08 01:01:41.465417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-08 01:01:41.465422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-08 01:01:41.465427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-08 01:01:41.465436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-08 01:01:41.465445 | orchestrator |
2026-01-08 01:01:41.465450 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] ***
2026-01-08 01:01:41.465455 | orchestrator | Thursday 08 January 2026 01:00:06 +0000 (0:00:02.319) 0:00:44.225 ******
2026-01-08 01:01:41.465459 | orchestrator | changed: [testbed-node-0] => {
2026-01-08 01:01:41.465464 | orchestrator |     "msg": "Notifying handlers"
2026-01-08 01:01:41.465468 | orchestrator | }
2026-01-08 01:01:41.465473 | orchestrator | changed: [testbed-node-1] => {
2026-01-08 01:01:41.465477 | orchestrator |     "msg": "Notifying handlers"
2026-01-08 01:01:41.465481 | orchestrator | }
2026-01-08 01:01:41.465486 | orchestrator | changed: [testbed-node-2] => {
2026-01-08 01:01:41.465491 | orchestrator |     "msg": "Notifying handlers"
2026-01-08 01:01:41.465585 | orchestrator | }
2026-01-08 01:01:41.465590 | orchestrator |
2026-01-08 01:01:41.465594 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-08 01:01:41.465599 | orchestrator | Thursday 08 January 2026 01:00:06 +0000 (0:00:00.315) 0:00:44.541 ******
2026-01-08 01:01:41.465608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-08 01:01:41.465613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-08 01:01:41.465619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-08 01:01:41.465761 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:01:41.465788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-08 01:01:41.465805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-08 01:01:41.465818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-08 01:01:41.465825 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:01:41.465833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2025.1', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-01-08 01:01:41.465841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2025.1', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-08 01:01:41.465849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2025.1', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-08 01:01:41.465861 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:01:41.465869 | orchestrator |
2026-01-08 01:01:41.465876 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-08 01:01:41.465884 | orchestrator | Thursday 08 January 2026 01:00:07 +0000 (0:00:01.023) 0:00:45.564 ******
2026-01-08 01:01:41.465891 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:01:41.465898 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:01:41.465904 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:01:41.465909 | orchestrator |
2026-01-08 01:01:41.465918 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-01-08 01:01:41.465923 | orchestrator | Thursday 08 January 2026 01:00:07 +0000 (0:00:00.305) 0:00:45.870 ******
2026-01-08 01:01:41.465927 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:01:41.465932 | orchestrator |
2026-01-08 01:01:41.465937 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-01-08 01:01:41.465942 | orchestrator | Thursday 08 January 2026 01:00:10 +0000 (0:00:02.221) 0:00:48.091 ******
2026-01-08 01:01:41.465947 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:01:41.465951 | orchestrator |
2026-01-08 01:01:41.465956 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-01-08 01:01:41.465961 | orchestrator | Thursday 08 January
2026 01:00:12 +0000 (0:00:02.107) 0:00:50.199 ****** 2026-01-08 01:01:41.465965 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:01:41.465971 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:01:41.465975 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:01:41.465980 | orchestrator | 2026-01-08 01:01:41.465985 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-01-08 01:01:41.465990 | orchestrator | Thursday 08 January 2026 01:00:13 +0000 (0:00:00.987) 0:00:51.187 ****** 2026-01-08 01:01:41.465994 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:01:41.465999 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:01:41.466004 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:01:41.466008 | orchestrator | 2026-01-08 01:01:41.466051 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-01-08 01:01:41.466059 | orchestrator | Thursday 08 January 2026 01:00:13 +0000 (0:00:00.349) 0:00:51.536 ****** 2026-01-08 01:01:41.466063 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:41.466068 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:01:41.466073 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:01:41.466078 | orchestrator | 2026-01-08 01:01:41.466087 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-01-08 01:01:41.466094 | orchestrator | Thursday 08 January 2026 01:00:14 +0000 (0:00:00.729) 0:00:52.266 ****** 2026-01-08 01:01:41.466101 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:01:41.466108 | orchestrator | 2026-01-08 01:01:41.466114 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-01-08 01:01:41.466125 | orchestrator | Thursday 08 January 2026 01:00:27 +0000 (0:00:13.697) 0:01:05.963 ****** 2026-01-08 01:01:41.466133 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:01:41.466142 | orchestrator | 2026-01-08 
01:01:41.466148 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-08 01:01:41.466154 | orchestrator | Thursday 08 January 2026 01:00:38 +0000 (0:00:10.380) 0:01:16.343 ****** 2026-01-08 01:01:41.466160 | orchestrator | 2026-01-08 01:01:41.466166 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-08 01:01:41.466171 | orchestrator | Thursday 08 January 2026 01:00:38 +0000 (0:00:00.064) 0:01:16.408 ****** 2026-01-08 01:01:41.466185 | orchestrator | 2026-01-08 01:01:41.466192 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-08 01:01:41.466198 | orchestrator | Thursday 08 January 2026 01:00:38 +0000 (0:00:00.073) 0:01:16.482 ****** 2026-01-08 01:01:41.466204 | orchestrator | 2026-01-08 01:01:41.466210 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-01-08 01:01:41.466216 | orchestrator | Thursday 08 January 2026 01:00:38 +0000 (0:00:00.070) 0:01:16.552 ****** 2026-01-08 01:01:41.466222 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:01:41.466230 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:01:41.466237 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:01:41.466242 | orchestrator | 2026-01-08 01:01:41.466248 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-01-08 01:01:41.466255 | orchestrator | Thursday 08 January 2026 01:00:48 +0000 (0:00:09.510) 0:01:26.063 ****** 2026-01-08 01:01:41.466261 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:01:41.466267 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:01:41.466273 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:01:41.466279 | orchestrator | 2026-01-08 01:01:41.466285 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-01-08 01:01:41.466291 | 
orchestrator | Thursday 08 January 2026 01:00:58 +0000 (0:00:10.395) 0:01:36.459 ****** 2026-01-08 01:01:41.466297 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:01:41.466303 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:01:41.466309 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:01:41.466315 | orchestrator | 2026-01-08 01:01:41.466321 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-08 01:01:41.466328 | orchestrator | Thursday 08 January 2026 01:01:05 +0000 (0:00:07.463) 0:01:43.923 ****** 2026-01-08 01:01:41.466334 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 01:01:41.466341 | orchestrator | 2026-01-08 01:01:41.466348 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-01-08 01:01:41.466354 | orchestrator | Thursday 08 January 2026 01:01:06 +0000 (0:00:00.707) 0:01:44.630 ****** 2026-01-08 01:01:41.466361 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:01:41.466368 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:01:41.466375 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:01:41.466382 | orchestrator | 2026-01-08 01:01:41.466388 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-01-08 01:01:41.466395 | orchestrator | Thursday 08 January 2026 01:01:07 +0000 (0:00:01.111) 0:01:45.742 ****** 2026-01-08 01:01:41.466401 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:01:41.466408 | orchestrator | 2026-01-08 01:01:41.466414 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-01-08 01:01:41.466422 | orchestrator | Thursday 08 January 2026 01:01:09 +0000 (0:00:01.660) 0:01:47.403 ****** 2026-01-08 01:01:41.466428 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-01-08 01:01:41.466435 | orchestrator | 
2026-01-08 01:01:41.466441 | orchestrator | TASK [service-ks-register : keystone | Creating/deleting services] ************* 2026-01-08 01:01:41.466447 | orchestrator | Thursday 08 January 2026 01:01:22 +0000 (0:00:13.284) 0:02:00.688 ****** 2026-01-08 01:01:41.466453 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-01-08 01:01:41.466460 | orchestrator | 2026-01-08 01:01:41.466475 | orchestrator | TASK [service-ks-register : keystone | Creating/deleting endpoints] ************ 2026-01-08 01:01:41.466481 | orchestrator | Thursday 08 January 2026 01:01:27 +0000 (0:00:05.298) 0:02:05.986 ****** 2026-01-08 01:01:41.466488 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-01-08 01:01:41.466544 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-01-08 01:01:41.466552 | orchestrator | 2026-01-08 01:01:41.466557 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-01-08 01:01:41.466576 | orchestrator | Thursday 08 January 2026 01:01:34 +0000 (0:00:06.135) 0:02:12.122 ****** 2026-01-08 01:01:41.466582 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:41.466588 | orchestrator | 2026-01-08 01:01:41.466594 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-01-08 01:01:41.466600 | orchestrator | Thursday 08 January 2026 01:01:34 +0000 (0:00:00.140) 0:02:12.262 ****** 2026-01-08 01:01:41.466606 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:41.466612 | orchestrator | 2026-01-08 01:01:41.466621 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-01-08 01:01:41.466628 | orchestrator | Thursday 08 January 2026 01:01:34 +0000 (0:00:00.111) 0:02:12.373 ****** 2026-01-08 01:01:41.466634 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:41.466640 | 
orchestrator | 2026-01-08 01:01:41.466646 | orchestrator | TASK [service-ks-register : keystone | Granting/revoking user roles] *********** 2026-01-08 01:01:41.466652 | orchestrator | Thursday 08 January 2026 01:01:34 +0000 (0:00:00.126) 0:02:12.500 ****** 2026-01-08 01:01:41.466658 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:41.466663 | orchestrator | 2026-01-08 01:01:41.466680 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-01-08 01:01:41.466686 | orchestrator | Thursday 08 January 2026 01:01:34 +0000 (0:00:00.339) 0:02:12.840 ****** 2026-01-08 01:01:41.466691 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:01:41.466697 | orchestrator | 2026-01-08 01:01:41.466703 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-08 01:01:41.466709 | orchestrator | Thursday 08 January 2026 01:01:38 +0000 (0:00:04.165) 0:02:17.005 ****** 2026-01-08 01:01:41.466715 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:01:41.466721 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:01:41.466728 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:01:41.466735 | orchestrator | 2026-01-08 01:01:41.466742 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 01:01:41.466749 | orchestrator | testbed-node-0 : ok=34  changed=20  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-01-08 01:01:41.466757 | orchestrator | testbed-node-1 : ok=23  changed=13  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-01-08 01:01:41.466763 | orchestrator | testbed-node-2 : ok=23  changed=13  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-01-08 01:01:41.466769 | orchestrator | 2026-01-08 01:01:41.466776 | orchestrator | 2026-01-08 01:01:41.466782 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 01:01:41.466788 | 
orchestrator | Thursday 08 January 2026 01:01:39 +0000 (0:00:00.680) 0:02:17.686 ****** 2026-01-08 01:01:41.466794 | orchestrator | =============================================================================== 2026-01-08 01:01:41.466800 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.70s 2026-01-08 01:01:41.466807 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.28s 2026-01-08 01:01:41.466813 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.40s 2026-01-08 01:01:41.466819 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.38s 2026-01-08 01:01:41.466826 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.75s 2026-01-08 01:01:41.466831 | orchestrator | keystone : Restart keystone-ssh container ------------------------------- 9.51s 2026-01-08 01:01:41.466835 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.46s 2026-01-08 01:01:41.466838 | orchestrator | service-ks-register : keystone | Creating/deleting endpoints ------------ 6.14s 2026-01-08 01:01:41.466842 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.83s 2026-01-08 01:01:41.466846 | orchestrator | service-ks-register : keystone | Creating/deleting services ------------- 5.30s 2026-01-08 01:01:41.466850 | orchestrator | keystone : Creating default user role ----------------------------------- 4.17s 2026-01-08 01:01:41.466859 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.69s 2026-01-08 01:01:41.466863 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.57s 2026-01-08 01:01:41.466867 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.01s 2026-01-08 01:01:41.466871 | orchestrator | 
service-check-containers : keystone | Check containers ------------------ 2.32s 2026-01-08 01:01:41.466875 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.22s 2026-01-08 01:01:41.466879 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.11s 2026-01-08 01:01:41.466883 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.85s 2026-01-08 01:01:41.466887 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.83s 2026-01-08 01:01:41.466891 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.66s 2026-01-08 01:01:41.466901 | orchestrator | 2026-01-08 01:01:41 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:01:41.466905 | orchestrator | 2026-01-08 01:01:41 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:01:41.466909 | orchestrator | 2026-01-08 01:01:41 | INFO  | Task 3498c70b-7696-4356-b011-9bda669c0b16 is in state STARTED 2026-01-08 01:01:41.466913 | orchestrator | 2026-01-08 01:01:41 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:01:44.515354 | orchestrator | 2026-01-08 01:01:44 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:01:44.515411 | orchestrator | 2026-01-08 01:01:44 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:01:44.515418 | orchestrator | 2026-01-08 01:01:44 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:01:44.515423 | orchestrator | 2026-01-08 01:01:44 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:01:44.515428 | orchestrator | 2026-01-08 01:01:44 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:01:44.515433 | orchestrator | 2026-01-08 01:01:44 | INFO  | Task 
3498c70b-7696-4356-b011-9bda669c0b16 is in state SUCCESS 2026-01-08 01:01:44.515459 | orchestrator | 2026-01-08 01:01:44 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:01:47.551235 | orchestrator | 2026-01-08 01:01:47 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:01:47.551299 | orchestrator | 2026-01-08 01:01:47 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:01:47.551308 | orchestrator | 2026-01-08 01:01:47 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:01:47.551317 | orchestrator | 2026-01-08 01:01:47 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:01:47.551326 | orchestrator | 2026-01-08 01:01:47 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:01:47.551336 | orchestrator | 2026-01-08 01:01:47 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:01:50.573679 | orchestrator | 2026-01-08 01:01:50 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:01:50.573801 | orchestrator | 2026-01-08 01:01:50 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:01:50.573812 | orchestrator | 2026-01-08 01:01:50 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:01:50.573818 | orchestrator | 2026-01-08 01:01:50 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:01:50.577718 | orchestrator | 2026-01-08 01:01:50 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:01:50.579239 | orchestrator | 2026-01-08 01:01:50 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:01:53.626219 | orchestrator | 2026-01-08 01:01:53 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:01:53.626324 | orchestrator | 2026-01-08 01:01:53 | INFO  | Task 
763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:01:53.627202 | orchestrator | 2026-01-08 01:01:53 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:01:53.627975 | orchestrator | 2026-01-08 01:01:53 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:01:53.629742 | orchestrator | 2026-01-08 01:01:53 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:01:53.629773 | orchestrator | 2026-01-08 01:01:53 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:01:56.678720 | orchestrator | 2026-01-08 01:01:56 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:01:56.678920 | orchestrator | 2026-01-08 01:01:56 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:01:56.680918 | orchestrator | 2026-01-08 01:01:56 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:01:56.680972 | orchestrator | 2026-01-08 01:01:56 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:01:56.687709 | orchestrator | 2026-01-08 01:01:56 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:01:56.687764 | orchestrator | 2026-01-08 01:01:56 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:01:59.718163 | orchestrator | 2026-01-08 01:01:59 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:01:59.719054 | orchestrator | 2026-01-08 01:01:59 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:01:59.719735 | orchestrator | 2026-01-08 01:01:59 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:01:59.720755 | orchestrator | 2026-01-08 01:01:59 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:01:59.722919 | orchestrator | 2026-01-08 01:01:59 | INFO  | Task 
47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:01:59.722951 | orchestrator | 2026-01-08 01:01:59 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:02:02.766960 | orchestrator | 2026-01-08 01:02:02 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:02:02.767658 | orchestrator | 2026-01-08 01:02:02 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:02:02.768379 | orchestrator | 2026-01-08 01:02:02 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:02:02.770292 | orchestrator | 2026-01-08 01:02:02 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:02:02.770323 | orchestrator | 2026-01-08 01:02:02 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:02:02.770337 | orchestrator | 2026-01-08 01:02:02 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:02:05.801249 | orchestrator | 2026-01-08 01:02:05 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:02:05.801983 | orchestrator | 2026-01-08 01:02:05 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:02:05.803678 | orchestrator | 2026-01-08 01:02:05 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:02:05.804675 | orchestrator | 2026-01-08 01:02:05 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:02:05.805688 | orchestrator | 2026-01-08 01:02:05 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:02:05.805720 | orchestrator | 2026-01-08 01:02:05 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:02:08.850837 | orchestrator | 2026-01-08 01:02:08 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:02:08.852241 | orchestrator | 2026-01-08 01:02:08 | INFO  | Task 
763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:02:08.854352 | orchestrator | 2026-01-08 01:02:08 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:02:08.856141 | orchestrator | 2026-01-08 01:02:08 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:02:08.858079 | orchestrator | 2026-01-08 01:02:08 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:02:08.858131 | orchestrator | 2026-01-08 01:02:08 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:02:11.899205 | orchestrator | 2026-01-08 01:02:11 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:02:11.900487 | orchestrator | 2026-01-08 01:02:11 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:02:11.901519 | orchestrator | 2026-01-08 01:02:11 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:02:11.902723 | orchestrator | 2026-01-08 01:02:11 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:02:11.905211 | orchestrator | 2026-01-08 01:02:11 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:02:11.905246 | orchestrator | 2026-01-08 01:02:11 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:02:14.954211 | orchestrator | 2026-01-08 01:02:14 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:02:14.957904 | orchestrator | 2026-01-08 01:02:14 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:02:14.959370 | orchestrator | 2026-01-08 01:02:14 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:02:14.962216 | orchestrator | 2026-01-08 01:02:14 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:02:14.965430 | orchestrator | 2026-01-08 01:02:14 | INFO  | Task 
47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:02:14.965489 | orchestrator | 2026-01-08 01:02:14 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:02:18.021532 | orchestrator | 2026-01-08 01:02:18 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:02:18.023735 | orchestrator | 2026-01-08 01:02:18 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:02:18.025446 | orchestrator | 2026-01-08 01:02:18 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:02:18.027255 | orchestrator | 2026-01-08 01:02:18 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:02:18.028476 | orchestrator | 2026-01-08 01:02:18 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:02:18.028751 | orchestrator | 2026-01-08 01:02:18 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:02:21.077173 | orchestrator | 2026-01-08 01:02:21 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:02:21.078224 | orchestrator | 2026-01-08 01:02:21 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:02:21.078840 | orchestrator | 2026-01-08 01:02:21 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:02:21.079704 | orchestrator | 2026-01-08 01:02:21 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:02:21.080499 | orchestrator | 2026-01-08 01:02:21 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:02:21.080526 | orchestrator | 2026-01-08 01:02:21 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:02:24.129835 | orchestrator | 2026-01-08 01:02:24 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:02:24.130648 | orchestrator | 2026-01-08 01:02:24 | INFO  | Task 
763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:02:24.133021 | orchestrator | 2026-01-08 01:02:24 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:02:24.134072 | orchestrator | 2026-01-08 01:02:24 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:02:24.135482 | orchestrator | 2026-01-08 01:02:24 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:02:24.135523 | orchestrator | 2026-01-08 01:02:24 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:02:27.175017 | orchestrator | 2026-01-08 01:02:27 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:02:27.176055 | orchestrator | 2026-01-08 01:02:27 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:02:27.177677 | orchestrator | 2026-01-08 01:02:27 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:02:27.179211 | orchestrator | 2026-01-08 01:02:27 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:02:27.181259 | orchestrator | 2026-01-08 01:02:27 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:02:27.181301 | orchestrator | 2026-01-08 01:02:27 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:02:30.212151 | orchestrator | 2026-01-08 01:02:30 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:02:30.212227 | orchestrator | 2026-01-08 01:02:30 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:02:30.212781 | orchestrator | 2026-01-08 01:02:30 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:02:30.213141 | orchestrator | 2026-01-08 01:02:30 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:02:30.213738 | orchestrator | 2026-01-08 01:02:30 | INFO  | Task 
47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:02:30.213788 | orchestrator | 2026-01-08 01:02:30 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:02:33.237252 | orchestrator | 2026-01-08 01:02:33 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:02:33.237302 | orchestrator | 2026-01-08 01:02:33 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:02:33.238803 | orchestrator | 2026-01-08 01:02:33 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:02:33.239425 | orchestrator | 2026-01-08 01:02:33 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:02:33.240072 | orchestrator | 2026-01-08 01:02:33 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:02:33.240108 | orchestrator | 2026-01-08 01:02:33 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:02:36.264410 | orchestrator | 2026-01-08 01:02:36 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:02:36.264517 | orchestrator | 2026-01-08 01:02:36 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:02:36.264760 | orchestrator | 2026-01-08 01:02:36 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:02:36.265458 | orchestrator | 2026-01-08 01:02:36 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:02:36.266212 | orchestrator | 2026-01-08 01:02:36 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:02:36.266251 | orchestrator | 2026-01-08 01:02:36 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:02:39.302167 | orchestrator | 2026-01-08 01:02:39 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:02:39.302236 | orchestrator | 2026-01-08 01:02:39 | INFO  | Task 
763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:02:39.302665 | orchestrator | 2026-01-08 01:02:39 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:02:39.303690 | orchestrator | 2026-01-08 01:02:39 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:02:39.304132 | orchestrator | 2026-01-08 01:02:39 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:02:39.304147 | orchestrator | 2026-01-08 01:02:39 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:02:42.333594 | orchestrator | 2026-01-08 01:02:42 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:02:42.335539 | orchestrator | 2026-01-08 01:02:42 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:02:42.335605 | orchestrator | 2026-01-08 01:02:42 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:02:42.335616 | orchestrator | 2026-01-08 01:02:42 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:02:42.340370 | orchestrator | 2026-01-08 01:02:42 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:02:42.340426 | orchestrator | 2026-01-08 01:02:42 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:02:45.354341 | orchestrator | 2026-01-08 01:02:45 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:02:45.355010 | orchestrator | 2026-01-08 01:02:45 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:02:45.355120 | orchestrator | 2026-01-08 01:02:45 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:02:45.356256 | orchestrator | 2026-01-08 01:02:45 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:02:45.357867 | orchestrator | 2026-01-08 01:02:45 | INFO  | Task 
47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:02:45.357938 | orchestrator | 2026-01-08 01:02:45 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:02:48.381189 | orchestrator | 2026-01-08 01:02:48 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:02:48.381476 | orchestrator | 2026-01-08 01:02:48 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:02:48.382799 | orchestrator | 2026-01-08 01:02:48 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:02:48.383258 | orchestrator | 2026-01-08 01:02:48 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:02:48.383918 | orchestrator | 2026-01-08 01:02:48 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:02:48.383943 | orchestrator | 2026-01-08 01:02:48 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:02:51.422751 | orchestrator | 2026-01-08 01:02:51 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:02:51.423081 | orchestrator | 2026-01-08 01:02:51 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:02:51.426897 | orchestrator | 2026-01-08 01:02:51 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:02:51.427307 | orchestrator | 2026-01-08 01:02:51 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:02:51.427839 | orchestrator | 2026-01-08 01:02:51 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:02:51.427858 | orchestrator | 2026-01-08 01:02:51 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:02:54.461432 | orchestrator | 2026-01-08 01:02:54 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:02:54.461802 | orchestrator | 2026-01-08 01:02:54 | INFO  | Task 
763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:02:54.462483 | orchestrator | 2026-01-08 01:02:54 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:02:54.463045 | orchestrator | 2026-01-08 01:02:54 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:02:54.464746 | orchestrator | 2026-01-08 01:02:54 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:02:54.464874 | orchestrator | 2026-01-08 01:02:54 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:02:57.487162 | orchestrator | 2026-01-08 01:02:57 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:02:57.488261 | orchestrator | 2026-01-08 01:02:57 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:02:57.489548 | orchestrator | 2026-01-08 01:02:57 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:02:57.491370 | orchestrator | 2026-01-08 01:02:57 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:02:57.491966 | orchestrator | 2026-01-08 01:02:57 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:02:57.492046 | orchestrator | 2026-01-08 01:02:57 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:03:00.512819 | orchestrator | 2026-01-08 01:03:00 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:03:00.513344 | orchestrator | 2026-01-08 01:03:00 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:03:00.513875 | orchestrator | 2026-01-08 01:03:00 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:03:00.514643 | orchestrator | 2026-01-08 01:03:00 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:03:00.515457 | orchestrator | 2026-01-08 01:03:00 | INFO  | Task 
47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:03:00.515492 | orchestrator | 2026-01-08 01:03:00 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:03:03.557402 | orchestrator | 2026-01-08 01:03:03 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:03:03.557757 | orchestrator | 2026-01-08 01:03:03 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:03:03.558370 | orchestrator | 2026-01-08 01:03:03 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:03:03.558985 | orchestrator | 2026-01-08 01:03:03 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:03:03.559563 | orchestrator | 2026-01-08 01:03:03 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:03:03.559576 | orchestrator | 2026-01-08 01:03:03 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:03:06.589513 | orchestrator | 2026-01-08 01:03:06 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:03:06.589973 | orchestrator | 2026-01-08 01:03:06 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:03:06.590742 | orchestrator | 2026-01-08 01:03:06 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:03:06.591284 | orchestrator | 2026-01-08 01:03:06 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:03:06.592094 | orchestrator | 2026-01-08 01:03:06 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:03:06.592123 | orchestrator | 2026-01-08 01:03:06 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:03:09.621210 | orchestrator | 2026-01-08 01:03:09 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:03:09.621424 | orchestrator | 2026-01-08 01:03:09 | INFO  | Task 
763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:03:09.622240 | orchestrator | 2026-01-08 01:03:09 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:03:09.622811 | orchestrator | 2026-01-08 01:03:09 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:03:09.623831 | orchestrator | 2026-01-08 01:03:09 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:03:09.623886 | orchestrator | 2026-01-08 01:03:09 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:03:12.653013 | orchestrator | 2026-01-08 01:03:12 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:03:12.653160 | orchestrator | 2026-01-08 01:03:12 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:03:12.653168 | orchestrator | 2026-01-08 01:03:12 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:03:12.653185 | orchestrator | 2026-01-08 01:03:12 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state STARTED 2026-01-08 01:03:12.653189 | orchestrator | 2026-01-08 01:03:12 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:03:12.653260 | orchestrator | 2026-01-08 01:03:12 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:03:15.677572 | orchestrator | 2026-01-08 01:03:15 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:03:15.678323 | orchestrator | 2026-01-08 01:03:15 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:03:15.679291 | orchestrator | 2026-01-08 01:03:15 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:03:15.680012 | orchestrator | 2026-01-08 01:03:15 | INFO  | Task 5b84f7c2-4a47-4a1f-8aa8-f18a59faaecc is in state SUCCESS 2026-01-08 01:03:15.681068 | orchestrator | 2026-01-08 01:03:15 | INFO  | Task 
47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:03:15.681173 | orchestrator | 2026-01-08 01:03:15 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:03:18.717266 | orchestrator | 2026-01-08 01:03:18 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:03:18.718957 | orchestrator | 2026-01-08 01:03:18 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:03:18.721177 | orchestrator | 2026-01-08 01:03:18 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:03:18.723742 | orchestrator | 2026-01-08 01:03:18 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:03:18.723792 | orchestrator | 2026-01-08 01:03:18 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:03:21.754674 | orchestrator | 2026-01-08 01:03:21 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:03:21.755601 | orchestrator | 2026-01-08 01:03:21 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:03:21.757079 | orchestrator | 2026-01-08 01:03:21 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:03:21.764357 | orchestrator | 2026-01-08 01:03:21 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:03:21.764399 | orchestrator | 2026-01-08 01:03:21 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:03:24.807266 | orchestrator | 2026-01-08 01:03:24 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:03:24.809542 | orchestrator | 2026-01-08 01:03:24 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:03:24.810531 | orchestrator | 2026-01-08 01:03:24 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:03:24.812156 | orchestrator | 2026-01-08 01:03:24 | INFO  | Task 
47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:03:24.812188 | orchestrator | 2026-01-08 01:03:24 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:03:27.845827 | orchestrator | 2026-01-08 01:03:27 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:03:27.846110 | orchestrator | 2026-01-08 01:03:27 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:03:27.846773 | orchestrator | 2026-01-08 01:03:27 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:03:27.848323 | orchestrator | 2026-01-08 01:03:27 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:03:27.848351 | orchestrator | 2026-01-08 01:03:27 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:03:30.876489 | orchestrator | 2026-01-08 01:03:30 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:03:30.876856 | orchestrator | 2026-01-08 01:03:30 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:03:30.877447 | orchestrator | 2026-01-08 01:03:30 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:03:30.878002 | orchestrator | 2026-01-08 01:03:30 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:03:30.878061 | orchestrator | 2026-01-08 01:03:30 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:03:33.909506 | orchestrator | 2026-01-08 01:03:33 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:03:33.909571 | orchestrator | 2026-01-08 01:03:33 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:03:33.910081 | orchestrator | 2026-01-08 01:03:33 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:03:33.911136 | orchestrator | 2026-01-08 01:03:33 | INFO  | Task 
47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:03:33.911175 | orchestrator | 2026-01-08 01:03:33 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:03:36.941387 | orchestrator | 2026-01-08 01:03:36 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:03:36.941900 | orchestrator | 2026-01-08 01:03:36 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:03:36.942716 | orchestrator | 2026-01-08 01:03:36 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:03:36.943397 | orchestrator | 2026-01-08 01:03:36 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:03:36.943430 | orchestrator | 2026-01-08 01:03:36 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:03:39.984649 | orchestrator | 2026-01-08 01:03:39 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:03:39.985894 | orchestrator | 2026-01-08 01:03:39 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:03:39.986853 | orchestrator | 2026-01-08 01:03:39 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:03:39.987691 | orchestrator | 2026-01-08 01:03:39 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:03:39.987729 | orchestrator | 2026-01-08 01:03:39 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:03:43.028468 | orchestrator | 2026-01-08 01:03:43 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:03:43.029070 | orchestrator | 2026-01-08 01:03:43 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:03:43.030154 | orchestrator | 2026-01-08 01:03:43 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:03:43.030942 | orchestrator | 2026-01-08 01:03:43 | INFO  | Task 
47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:03:43.030962 | orchestrator | 2026-01-08 01:03:43 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:03:46.062827 | orchestrator | 2026-01-08 01:03:46 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:03:46.063305 | orchestrator | 2026-01-08 01:03:46 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:03:46.064261 | orchestrator | 2026-01-08 01:03:46 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:03:46.064880 | orchestrator | 2026-01-08 01:03:46 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:03:46.065052 | orchestrator | 2026-01-08 01:03:46 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:03:49.104355 | orchestrator | 2026-01-08 01:03:49 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:03:49.104941 | orchestrator | 2026-01-08 01:03:49 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:03:49.105399 | orchestrator | 2026-01-08 01:03:49 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:03:49.106067 | orchestrator | 2026-01-08 01:03:49 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:03:49.106172 | orchestrator | 2026-01-08 01:03:49 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:03:52.140085 | orchestrator | 2026-01-08 01:03:52 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:03:52.140875 | orchestrator | 2026-01-08 01:03:52 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:03:52.142077 | orchestrator | 2026-01-08 01:03:52 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:03:52.143261 | orchestrator | 2026-01-08 01:03:52 | INFO  | Task 
47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:03:52.143392 | orchestrator | 2026-01-08 01:03:52 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:03:55.204352 | orchestrator | 2026-01-08 01:03:55 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:03:55.204416 | orchestrator | 2026-01-08 01:03:55 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:03:55.205055 | orchestrator | 2026-01-08 01:03:55 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:03:55.209991 | orchestrator | 2026-01-08 01:03:55 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:03:55.210068 | orchestrator | 2026-01-08 01:03:55 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:03:58.248018 | orchestrator | 2026-01-08 01:03:58 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:03:58.248962 | orchestrator | 2026-01-08 01:03:58 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:03:58.249763 | orchestrator | 2026-01-08 01:03:58 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:03:58.250508 | orchestrator | 2026-01-08 01:03:58 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:03:58.250536 | orchestrator | 2026-01-08 01:03:58 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:04:01.289779 | orchestrator | 2026-01-08 01:04:01 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:04:01.291422 | orchestrator | 2026-01-08 01:04:01 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:04:01.292204 | orchestrator | 2026-01-08 01:04:01 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:04:01.293105 | orchestrator | 2026-01-08 01:04:01 | INFO  | Task 
47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:04:01.294038 | orchestrator | 2026-01-08 01:04:01 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:04:04.321479 | orchestrator | 2026-01-08 01:04:04 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:04:04.324976 | orchestrator | 2026-01-08 01:04:04 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:04:04.326267 | orchestrator | 2026-01-08 01:04:04 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:04:04.327693 | orchestrator | 2026-01-08 01:04:04 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:04:04.327749 | orchestrator | 2026-01-08 01:04:04 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:04:07.435409 | orchestrator | 2026-01-08 01:04:07 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:04:07.435470 | orchestrator | 2026-01-08 01:04:07 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:04:07.436629 | orchestrator | 2026-01-08 01:04:07 | INFO  | Task 6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state STARTED 2026-01-08 01:04:07.437538 | orchestrator | 2026-01-08 01:04:07 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:04:07.437587 | orchestrator | 2026-01-08 01:04:07 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:04:10.462588 | orchestrator | 2026-01-08 01:04:10 | INFO  | Task 993b5407-d503-48eb-9395-f5ab7cdec5f2 is in state STARTED 2026-01-08 01:04:10.463020 | orchestrator | 2026-01-08 01:04:10 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:04:10.464607 | orchestrator | 2026-01-08 01:04:10 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:04:10.465863 | orchestrator | 2026-01-08 01:04:10 | INFO  | Task 
6a9b2eff-804a-4bca-9276-8f38ea3ecf50 is in state SUCCESS 2026-01-08 01:04:10.479665 | orchestrator | 2026-01-08 01:04:10.479709 | orchestrator | 2026-01-08 01:04:10.479715 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-08 01:04:10.479719 | orchestrator | 2026-01-08 01:04:10.479723 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-08 01:04:10.479728 | orchestrator | Thursday 08 January 2026 01:01:40 +0000 (0:00:00.205) 0:00:00.205 ****** 2026-01-08 01:04:10.479732 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:04:10.479736 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:04:10.479740 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:04:10.479744 | orchestrator | 2026-01-08 01:04:10.479747 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-08 01:04:10.479751 | orchestrator | Thursday 08 January 2026 01:01:40 +0000 (0:00:00.440) 0:00:00.646 ****** 2026-01-08 01:04:10.479755 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-08 01:04:10.479759 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-08 01:04:10.479763 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-08 01:04:10.479767 | orchestrator | 2026-01-08 01:04:10.479771 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-01-08 01:04:10.479775 | orchestrator | 2026-01-08 01:04:10.479778 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-01-08 01:04:10.479782 | orchestrator | Thursday 08 January 2026 01:01:41 +0000 (0:00:01.042) 0:00:01.688 ****** 2026-01-08 01:04:10.479785 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:04:10.479789 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:04:10.479793 | orchestrator | ok: [testbed-node-2] 2026-01-08 
01:04:10.479810 | orchestrator | 2026-01-08 01:04:10.479816 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 01:04:10.479820 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-08 01:04:10.479824 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-08 01:04:10.479828 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-08 01:04:10.479831 | orchestrator | 2026-01-08 01:04:10.479834 | orchestrator | 2026-01-08 01:04:10.479844 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 01:04:10.479848 | orchestrator | Thursday 08 January 2026 01:01:42 +0000 (0:00:00.988) 0:00:02.677 ****** 2026-01-08 01:04:10.479851 | orchestrator | =============================================================================== 2026-01-08 01:04:10.479854 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.04s 2026-01-08 01:04:10.479857 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.99s 2026-01-08 01:04:10.479860 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.44s 2026-01-08 01:04:10.479863 | orchestrator | 2026-01-08 01:04:10.479866 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-08 01:04:10.479870 | orchestrator | 2.16.14 2026-01-08 01:04:10.479873 | orchestrator | 2026-01-08 01:04:10.479877 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-01-08 01:04:10.479880 | orchestrator | 2026-01-08 01:04:10.479883 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-01-08 01:04:10.479895 | orchestrator | Thursday 08 January 2026 
01:01:40 +0000 (0:00:00.305) 0:00:00.305 ****** 2026-01-08 01:04:10.479898 | orchestrator | changed: [testbed-manager] 2026-01-08 01:04:10.479901 | orchestrator | 2026-01-08 01:04:10.479904 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-01-08 01:04:10.479908 | orchestrator | Thursday 08 January 2026 01:01:42 +0000 (0:00:01.684) 0:00:01.989 ****** 2026-01-08 01:04:10.479911 | orchestrator | changed: [testbed-manager] 2026-01-08 01:04:10.479931 | orchestrator | 2026-01-08 01:04:10.479935 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-01-08 01:04:10.479938 | orchestrator | Thursday 08 January 2026 01:01:43 +0000 (0:00:01.018) 0:00:03.008 ****** 2026-01-08 01:04:10.479941 | orchestrator | changed: [testbed-manager] 2026-01-08 01:04:10.479944 | orchestrator | 2026-01-08 01:04:10.479947 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-01-08 01:04:10.479951 | orchestrator | Thursday 08 January 2026 01:01:44 +0000 (0:00:01.089) 0:00:04.098 ****** 2026-01-08 01:04:10.479954 | orchestrator | changed: [testbed-manager] 2026-01-08 01:04:10.479957 | orchestrator | 2026-01-08 01:04:10.479960 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-01-08 01:04:10.479963 | orchestrator | Thursday 08 January 2026 01:01:45 +0000 (0:00:01.283) 0:00:05.381 ****** 2026-01-08 01:04:10.479966 | orchestrator | changed: [testbed-manager] 2026-01-08 01:04:10.479969 | orchestrator | 2026-01-08 01:04:10.479973 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-01-08 01:04:10.479976 | orchestrator | Thursday 08 January 2026 01:01:46 +0000 (0:00:01.108) 0:00:06.490 ****** 2026-01-08 01:04:10.479979 | orchestrator | changed: [testbed-manager] 2026-01-08 01:04:10.479982 | orchestrator | 2026-01-08 01:04:10.479985 | orchestrator | TASK 
[Enable the ceph dashboard] *********************************************** 2026-01-08 01:04:10.480000 | orchestrator | Thursday 08 January 2026 01:01:47 +0000 (0:00:01.112) 0:00:07.602 ****** 2026-01-08 01:04:10.480003 | orchestrator | changed: [testbed-manager] 2026-01-08 01:04:10.480007 | orchestrator | 2026-01-08 01:04:10.480010 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-01-08 01:04:10.480013 | orchestrator | Thursday 08 January 2026 01:01:49 +0000 (0:00:02.119) 0:00:09.722 ****** 2026-01-08 01:04:10.480017 | orchestrator | changed: [testbed-manager] 2026-01-08 01:04:10.480020 | orchestrator | 2026-01-08 01:04:10.480023 | orchestrator | TASK [Create admin user] ******************************************************* 2026-01-08 01:04:10.480026 | orchestrator | Thursday 08 January 2026 01:01:50 +0000 (0:00:00.929) 0:00:10.651 ****** 2026-01-08 01:04:10.480029 | orchestrator | changed: [testbed-manager] 2026-01-08 01:04:10.480059 | orchestrator | 2026-01-08 01:04:10.480070 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-01-08 01:04:10.480084 | orchestrator | Thursday 08 January 2026 01:02:51 +0000 (0:01:00.350) 0:01:11.002 ****** 2026-01-08 01:04:10.480088 | orchestrator | skipping: [testbed-manager] 2026-01-08 01:04:10.480091 | orchestrator | 2026-01-08 01:04:10.480094 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-08 01:04:10.480097 | orchestrator | 2026-01-08 01:04:10.480100 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-08 01:04:10.480103 | orchestrator | Thursday 08 January 2026 01:02:51 +0000 (0:00:00.131) 0:01:11.134 ****** 2026-01-08 01:04:10.480106 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:04:10.480109 | orchestrator | 2026-01-08 01:04:10.480113 | orchestrator | PLAY [Restart ceph manager services] 
******************************************* 2026-01-08 01:04:10.480116 | orchestrator | 2026-01-08 01:04:10.480119 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-08 01:04:10.480122 | orchestrator | Thursday 08 January 2026 01:03:02 +0000 (0:00:11.480) 0:01:22.614 ****** 2026-01-08 01:04:10.480125 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:04:10.480128 | orchestrator | 2026-01-08 01:04:10.480131 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-08 01:04:10.480137 | orchestrator | 2026-01-08 01:04:10.480141 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-08 01:04:10.480144 | orchestrator | Thursday 08 January 2026 01:03:14 +0000 (0:00:11.366) 0:01:33.981 ****** 2026-01-08 01:04:10.480147 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:04:10.480150 | orchestrator | 2026-01-08 01:04:10.480153 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 01:04:10.480156 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-08 01:04:10.480160 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-08 01:04:10.480163 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-08 01:04:10.480169 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-08 01:04:10.480172 | orchestrator | 2026-01-08 01:04:10.480175 | orchestrator | 2026-01-08 01:04:10.480178 | orchestrator | 2026-01-08 01:04:10.480182 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 01:04:10.480185 | orchestrator | Thursday 08 January 2026 01:03:15 +0000 (0:00:01.296) 0:01:35.278 ****** 
2026-01-08 01:04:10.480188 | orchestrator | =============================================================================== 2026-01-08 01:04:10.480191 | orchestrator | Create admin user ------------------------------------------------------ 60.35s 2026-01-08 01:04:10.480194 | orchestrator | Restart ceph manager service ------------------------------------------- 24.15s 2026-01-08 01:04:10.480197 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.12s 2026-01-08 01:04:10.480200 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.68s 2026-01-08 01:04:10.480204 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.28s 2026-01-08 01:04:10.480207 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.11s 2026-01-08 01:04:10.480210 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.11s 2026-01-08 01:04:10.480213 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.09s 2026-01-08 01:04:10.480216 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.02s 2026-01-08 01:04:10.480219 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.93s 2026-01-08 01:04:10.480222 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.13s 2026-01-08 01:04:10.480225 | orchestrator | 2026-01-08 01:04:10.480228 | orchestrator | 2026-01-08 01:04:10.480231 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-08 01:04:10.480234 | orchestrator | 2026-01-08 01:04:10.480237 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-08 01:04:10.480240 | orchestrator | Thursday 08 January 2026 01:01:46 +0000 (0:00:00.606) 0:00:00.606 ****** 2026-01-08 
01:04:10.480244 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:04:10.480247 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:04:10.480250 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:04:10.480253 | orchestrator |
2026-01-08 01:04:10.480256 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-08 01:04:10.480259 | orchestrator | Thursday 08 January 2026 01:01:47 +0000 (0:00:00.568) 0:00:01.174 ******
2026-01-08 01:04:10.480262 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-01-08 01:04:10.480265 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-01-08 01:04:10.480269 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-01-08 01:04:10.480272 | orchestrator |
2026-01-08 01:04:10.480275 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-01-08 01:04:10.480280 | orchestrator |
2026-01-08 01:04:10.480283 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-01-08 01:04:10.480286 | orchestrator | Thursday 08 January 2026 01:01:48 +0000 (0:00:00.674) 0:00:01.849 ******
2026-01-08 01:04:10.480290 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 01:04:10.480293 | orchestrator |
2026-01-08 01:04:10.480296 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting services] *************
2026-01-08 01:04:10.480299 | orchestrator | Thursday 08 January 2026 01:01:48 +0000 (0:00:00.611) 0:00:02.461 ******
2026-01-08 01:04:10.480302 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-01-08 01:04:10.480305 | orchestrator |
2026-01-08 01:04:10.480311 | orchestrator | TASK [service-ks-register : barbican | Creating/deleting endpoints] ************
2026-01-08 01:04:10.480314 | orchestrator | Thursday 08 January 2026 01:01:53 +0000 (0:00:04.676) 0:00:07.137 ******
2026-01-08 01:04:10.480317 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-01-08 01:04:10.480320 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-01-08 01:04:10.480324 | orchestrator |
2026-01-08 01:04:10.480327 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-01-08 01:04:10.480330 | orchestrator | Thursday 08 January 2026 01:01:59 +0000 (0:00:06.575) 0:00:13.713 ******
2026-01-08 01:04:10.480333 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-01-08 01:04:10.480336 | orchestrator |
2026-01-08 01:04:10.480339 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-01-08 01:04:10.480343 | orchestrator | Thursday 08 January 2026 01:02:03 +0000 (0:00:03.909) 0:00:17.623 ******
2026-01-08 01:04:10.480346 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-08 01:04:10.480349 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-01-08 01:04:10.480352 | orchestrator |
2026-01-08 01:04:10.480355 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-01-08 01:04:10.480358 | orchestrator | Thursday 08 January 2026 01:02:07 +0000 (0:00:03.853) 0:00:21.476 ******
2026-01-08 01:04:10.480361 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-08 01:04:10.480365 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-01-08 01:04:10.480368 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-01-08 01:04:10.480371 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-01-08 01:04:10.480374 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-01-08 01:04:10.480377 | orchestrator |
2026-01-08 01:04:10.480380 | orchestrator | TASK [service-ks-register : barbican | Granting/revoking user roles] ***********
2026-01-08 01:04:10.480383 | orchestrator | Thursday 08 January 2026 01:02:23 +0000 (0:00:16.206) 0:00:37.683 ******
2026-01-08 01:04:10.480386 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-01-08 01:04:10.480389 | orchestrator |
2026-01-08 01:04:10.480395 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-01-08 01:04:10.480398 | orchestrator | Thursday 08 January 2026 01:02:28 +0000 (0:00:04.317) 0:00:42.000 ******
2026-01-08 01:04:10.480404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:04:10.480411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:04:10.480418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:04:10.480422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480450 | orchestrator |
2026-01-08 01:04:10.480453 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-01-08 01:04:10.480457 | orchestrator | Thursday 08 January 2026 01:02:30 +0000 (0:00:02.658) 0:00:44.659 ******
2026-01-08 01:04:10.480460 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-01-08 01:04:10.480463 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-01-08 01:04:10.480466 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-01-08 01:04:10.480469 | orchestrator |
2026-01-08 01:04:10.480472 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-01-08 01:04:10.480475 | orchestrator | Thursday 08 January 2026 01:02:32 +0000 (0:00:00.186) 0:00:46.057 ******
2026-01-08 01:04:10.480479 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:04:10.480482 | orchestrator |
2026-01-08 01:04:10.480485 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-01-08 01:04:10.480488 | orchestrator | Thursday 08 January 2026 01:02:32 +0000 (0:00:00.632) 0:00:46.244 ******
2026-01-08 01:04:10.480491 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:04:10.480494 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:04:10.480497 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:04:10.480500 | orchestrator |
2026-01-08 01:04:10.480504 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-01-08 01:04:10.480507 | orchestrator | Thursday 08 January 2026 01:02:33 +0000 (0:00:00.632) 0:00:46.877 ******
2026-01-08 01:04:10.480510 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 01:04:10.480513 | orchestrator |
2026-01-08 01:04:10.480518 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2026-01-08 01:04:10.480523 | orchestrator | Thursday 08 January 2026 01:02:34 +0000 (0:00:01.362) 0:00:48.239 ******
2026-01-08 01:04:10.480527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:04:10.480531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:04:10.480537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:04:10.480540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480570 | orchestrator |
2026-01-08 01:04:10.480575 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2026-01-08 01:04:10.480585 | orchestrator | Thursday 08 January 2026 01:02:38 +0000 (0:00:03.813) 0:00:52.053 ******
2026-01-08 01:04:10.480596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:04:10.480613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480623 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:04:10.480628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:04:10.480638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480651 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:04:10.480659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:04:10.480666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480679 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:04:10.480685 | orchestrator |
2026-01-08 01:04:10.480691 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2026-01-08 01:04:10.480697 | orchestrator | Thursday 08 January 2026 01:02:40 +0000 (0:00:02.548) 0:00:54.601 ******
2026-01-08 01:04:10.480892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:04:10.480901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480915 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:04:10.480918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:04:10.480922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480931 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:04:10.480934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:04:10.480942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480948 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:04:10.480951 | orchestrator |
2026-01-08 01:04:10.480954 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2026-01-08 01:04:10.480958 | orchestrator | Thursday 08 January 2026 01:02:42 +0000 (0:00:01.906) 0:00:56.508 ******
2026-01-08 01:04:10.480961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:04:10.480967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:04:10.480974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:04:10.480978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.480984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-08 01:04:10.480990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:04:10.480995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:04:10.480999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.481002 | orchestrator |
2026-01-08 01:04:10.481005 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-01-08 01:04:10.481008 | orchestrator | Thursday 08 January 2026 01:02:47 +0000 (0:00:04.505) 0:01:01.013 ******
2026-01-08 01:04:10.481011 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:04:10.481015 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:04:10.481019 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:04:10.481022 | orchestrator |
2026-01-08 01:04:10.481025 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-01-08 01:04:10.481029 | orchestrator | Thursday 08 January 2026 01:02:49 +0000 (0:00:00.919) 0:01:03.652 ******
2026-01-08 01:04:10.481032 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-08 01:04:10.481035 | orchestrator |
2026-01-08 01:04:10.481038 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-01-08 01:04:10.481041 | orchestrator | Thursday 08 January 2026 01:02:50 +0000 (0:00:00.919) 0:01:04.572 ******
2026-01-08 01:04:10.481044 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:04:10.481047 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:04:10.481050 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:04:10.481053 | orchestrator |
2026-01-08 01:04:10.481056 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-01-08 01:04:10.481059 | orchestrator | Thursday 08 January 2026 01:02:51 +0000 (0:00:00.918) 0:01:05.492 ******
2026-01-08 01:04:10.481063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value':
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:04:10.481068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}}}}) 2026-01-08 01:04:10.481074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:04:10.481079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-08 01:04:10.481083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-08 01:04:10.481086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-08 01:04:10.481089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:04:10.481097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:04:10.481101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:04:10.481104 | orchestrator | 2026-01-08 01:04:10.481107 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-01-08 01:04:10.481110 | orchestrator | Thursday 08 January 2026 01:03:04 +0000 (0:00:12.444) 0:01:17.936 ****** 2026-01-08 01:04:10.481118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:04:10.481122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-08 01:04:10.481125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-08 01:04:10.481130 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:04:10.481136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:04:10.481142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-08 01:04:10.481149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-08 01:04:10.481155 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:04:10.481160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:04:10.481165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-08 01:04:10.481174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-08 01:04:10.481179 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:04:10.481184 | orchestrator | 2026-01-08 01:04:10.481191 | orchestrator | TASK [service-check-containers : barbican | Check containers] ****************** 2026-01-08 01:04:10.481197 | orchestrator | Thursday 08 January 2026 01:03:04 +0000 (0:00:00.648) 0:01:18.585 ****** 2026-01-08 01:04:10.481202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:04:10.481210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:04:10.481216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:04:10.481225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.481234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.481240 | orchestrator | 2026-01-08 01:04:10 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED
2026-01-08 01:04:10.482396 | orchestrator | 2026-01-08 01:04:10 | INFO  | Wait 1 second(s) until the next check
2026-01-08
01:04:10.482429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-08 01:04:10.482442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:04:10.482448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:04:10.482460 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:04:10.482465 | orchestrator |
2026-01-08 01:04:10.482471 | orchestrator | TASK [service-check-containers : barbican | Notify handlers to restart containers] ***
2026-01-08 01:04:10.482476 | orchestrator | Thursday 08 January 2026 01:03:08 +0000 (0:00:03.986) 0:01:22.571 ******
2026-01-08 01:04:10.482482 | orchestrator | changed: [testbed-node-0] => {
2026-01-08 01:04:10.482487 | orchestrator |  "msg": "Notifying handlers"
2026-01-08 01:04:10.482492 | orchestrator | }
2026-01-08 01:04:10.482497 | orchestrator | changed: [testbed-node-1] => {
2026-01-08 01:04:10.482500 | orchestrator |  "msg": "Notifying handlers"
2026-01-08 01:04:10.482504 | orchestrator | }
2026-01-08 01:04:10.482507 | orchestrator | changed: [testbed-node-2] => {
2026-01-08 01:04:10.482510 | orchestrator |  "msg": "Notifying handlers"
2026-01-08 01:04:10.482513 | orchestrator | }
2026-01-08 01:04:10.482516 | orchestrator |
2026-01-08 01:04:10.482519 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-08 01:04:10.482522 | orchestrator | Thursday 08 January 2026 01:03:09 +0000 (0:00:00.895) 0:01:23.466 ******
2026-01-08 01:04:10.482531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1',
'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:04:10.482534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-08 01:04:10.482540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-08 01:04:10.482546 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:04:10.482549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:04:10.482553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
2026-01-08 01:04:10.482556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-08 01:04:10.482561 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:04:10.482564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2025.1', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:04:10.482568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2025.1', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-08 01:04:10.482574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2025.1', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-08 01:04:10.482577 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:04:10.482580 | orchestrator | 2026-01-08 01:04:10.482583 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-08 01:04:10.482586 | orchestrator | Thursday 08 January 2026 01:03:11 +0000 (0:00:01.577) 0:01:25.043 ****** 2026-01-08 01:04:10.482590 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:04:10.482593 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:04:10.482596 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:04:10.482599 | orchestrator | 2026-01-08 01:04:10.482602 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-01-08 01:04:10.482605 | orchestrator | Thursday 08 January 2026 01:03:11 +0000 (0:00:00.345) 0:01:25.389 ****** 2026-01-08 01:04:10.482608 | 
orchestrator | changed: [testbed-node-0] 2026-01-08 01:04:10.482611 | orchestrator | 2026-01-08 01:04:10.482614 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-01-08 01:04:10.482617 | orchestrator | Thursday 08 January 2026 01:03:14 +0000 (0:00:02.825) 0:01:28.215 ****** 2026-01-08 01:04:10.482620 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:04:10.482623 | orchestrator | 2026-01-08 01:04:10.482642 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-01-08 01:04:10.482646 | orchestrator | Thursday 08 January 2026 01:03:17 +0000 (0:00:02.664) 0:01:30.879 ****** 2026-01-08 01:04:10.482649 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:04:10.482652 | orchestrator | 2026-01-08 01:04:10.482655 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-08 01:04:10.482658 | orchestrator | Thursday 08 January 2026 01:03:28 +0000 (0:00:10.939) 0:01:41.819 ****** 2026-01-08 01:04:10.482661 | orchestrator | 2026-01-08 01:04:10.482664 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-08 01:04:10.482667 | orchestrator | Thursday 08 January 2026 01:03:28 +0000 (0:00:00.228) 0:01:42.048 ****** 2026-01-08 01:04:10.482671 | orchestrator | 2026-01-08 01:04:10.482674 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-08 01:04:10.482677 | orchestrator | Thursday 08 January 2026 01:03:28 +0000 (0:00:00.267) 0:01:42.317 ****** 2026-01-08 01:04:10.482680 | orchestrator | 2026-01-08 01:04:10.482683 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-01-08 01:04:10.482686 | orchestrator | Thursday 08 January 2026 01:03:28 +0000 (0:00:00.249) 0:01:42.566 ****** 2026-01-08 01:04:10.482689 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:04:10.482692 | 
orchestrator | changed: [testbed-node-1] 2026-01-08 01:04:10.482695 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:04:10.482699 | orchestrator | 2026-01-08 01:04:10.482702 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-01-08 01:04:10.482707 | orchestrator | Thursday 08 January 2026 01:03:43 +0000 (0:00:14.574) 0:01:57.140 ****** 2026-01-08 01:04:10.482710 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:04:10.482713 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:04:10.482716 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:04:10.482719 | orchestrator | 2026-01-08 01:04:10.482722 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-01-08 01:04:10.482725 | orchestrator | Thursday 08 January 2026 01:03:55 +0000 (0:00:11.651) 0:02:08.792 ****** 2026-01-08 01:04:10.482731 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:04:10.482734 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:04:10.482737 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:04:10.482740 | orchestrator | 2026-01-08 01:04:10.482743 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 01:04:10.482747 | orchestrator | testbed-node-0 : ok=25  changed=20  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-08 01:04:10.482751 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-08 01:04:10.482754 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-08 01:04:10.482757 | orchestrator | 2026-01-08 01:04:10.482760 | orchestrator | 2026-01-08 01:04:10.482763 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 01:04:10.482766 | orchestrator | Thursday 08 January 2026 01:04:08 +0000 (0:00:13.506) 0:02:22.298 
****** 2026-01-08 01:04:10.482769 | orchestrator | =============================================================================== 2026-01-08 01:04:10.482772 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.21s 2026-01-08 01:04:10.482785 | orchestrator | barbican : Restart barbican-api container ------------------------------ 14.57s 2026-01-08 01:04:10.482788 | orchestrator | barbican : Restart barbican-worker container --------------------------- 13.51s 2026-01-08 01:04:10.482791 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 12.44s 2026-01-08 01:04:10.482795 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.65s 2026-01-08 01:04:10.482807 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 10.94s 2026-01-08 01:04:10.482811 | orchestrator | service-ks-register : barbican | Creating/deleting endpoints ------------ 6.58s 2026-01-08 01:04:10.482814 | orchestrator | service-ks-register : barbican | Creating/deleting services ------------- 4.68s 2026-01-08 01:04:10.482818 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.51s 2026-01-08 01:04:10.482823 | orchestrator | service-ks-register : barbican | Granting/revoking user roles ----------- 4.32s 2026-01-08 01:04:10.482828 | orchestrator | service-check-containers : barbican | Check containers ------------------ 3.99s 2026-01-08 01:04:10.482833 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.91s 2026-01-08 01:04:10.482839 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.85s 2026-01-08 01:04:10.482844 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.81s 2026-01-08 01:04:10.482849 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.83s 
2026-01-08 01:04:10.482853 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.66s 2026-01-08 01:04:10.482859 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.66s 2026-01-08 01:04:10.482864 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.64s 2026-01-08 01:04:10.482869 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.55s 2026-01-08 01:04:10.482875 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.91s 2026-01-08 01:04:13.492537 | orchestrator | 2026-01-08 01:04:13 | INFO  | Task 993b5407-d503-48eb-9395-f5ab7cdec5f2 is in state STARTED 2026-01-08 01:04:13.493206 | orchestrator | 2026-01-08 01:04:13 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:04:13.493900 | orchestrator | 2026-01-08 01:04:13 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:04:13.494555 | orchestrator | 2026-01-08 01:04:13 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:04:13.494589 | orchestrator | 2026-01-08 01:04:13 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:04:16.527325 | orchestrator | 2026-01-08 01:04:16 | INFO  | Task 993b5407-d503-48eb-9395-f5ab7cdec5f2 is in state STARTED 2026-01-08 01:04:16.527718 | orchestrator | 2026-01-08 01:04:16 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:04:16.528441 | orchestrator | 2026-01-08 01:04:16 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:04:16.529150 | orchestrator | 2026-01-08 01:04:16 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:04:16.529220 | orchestrator | 2026-01-08 01:04:16 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:04:19.560767 | orchestrator | 2026-01-08 
01:04:19 | INFO  | Task 993b5407-d503-48eb-9395-f5ab7cdec5f2 is in state STARTED 2026-01-08 01:04:19.562297 | orchestrator | 2026-01-08 01:04:19 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:04:19.564023 | orchestrator | 2026-01-08 01:04:19 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:04:19.565494 | orchestrator | 2026-01-08 01:04:19 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:04:19.565530 | orchestrator | 2026-01-08 01:04:19 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:04:22.595064 | orchestrator | 2026-01-08 01:04:22 | INFO  | Task 993b5407-d503-48eb-9395-f5ab7cdec5f2 is in state STARTED 2026-01-08 01:04:22.595118 | orchestrator | 2026-01-08 01:04:22 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:04:22.595587 | orchestrator | 2026-01-08 01:04:22 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:04:22.596154 | orchestrator | 2026-01-08 01:04:22 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:04:22.596218 | orchestrator | 2026-01-08 01:04:22 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:04:25.686105 | orchestrator | 2026-01-08 01:04:25 | INFO  | Task 993b5407-d503-48eb-9395-f5ab7cdec5f2 is in state STARTED 2026-01-08 01:04:25.687104 | orchestrator | 2026-01-08 01:04:25 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:04:25.688324 | orchestrator | 2026-01-08 01:04:25 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:04:25.689138 | orchestrator | 2026-01-08 01:04:25 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:04:25.689170 | orchestrator | 2026-01-08 01:04:25 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:04:28.729531 | orchestrator | 2026-01-08 01:04:28 | INFO  | Task 
993b5407-d503-48eb-9395-f5ab7cdec5f2 is in state STARTED 2026-01-08 01:04:28.729844 | orchestrator | 2026-01-08 01:04:28 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:04:28.731025 | orchestrator | 2026-01-08 01:04:28 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:04:28.734313 | orchestrator | 2026-01-08 01:04:28 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:04:28.734361 | orchestrator | 2026-01-08 01:04:28 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:04:31.784102 | orchestrator | 2026-01-08 01:04:31 | INFO  | Task 993b5407-d503-48eb-9395-f5ab7cdec5f2 is in state STARTED 2026-01-08 01:04:31.788038 | orchestrator | 2026-01-08 01:04:31 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:04:31.789714 | orchestrator | 2026-01-08 01:04:31 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:04:31.791667 | orchestrator | 2026-01-08 01:04:31 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:04:31.791756 | orchestrator | 2026-01-08 01:04:31 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:04:34.899397 | orchestrator | 2026-01-08 01:04:34 | INFO  | Task 993b5407-d503-48eb-9395-f5ab7cdec5f2 is in state STARTED 2026-01-08 01:04:34.899726 | orchestrator | 2026-01-08 01:04:34 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:04:34.900507 | orchestrator | 2026-01-08 01:04:34 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:04:34.901250 | orchestrator | 2026-01-08 01:04:34 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:04:34.901303 | orchestrator | 2026-01-08 01:04:34 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:04:37.943129 | orchestrator | 2026-01-08 01:04:37 | INFO  | Task 
993b5407-d503-48eb-9395-f5ab7cdec5f2 is in state STARTED 2026-01-08 01:04:37.944042 | orchestrator | 2026-01-08 01:04:37 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:04:37.945103 | orchestrator | 2026-01-08 01:04:37 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:04:37.946256 | orchestrator | 2026-01-08 01:04:37 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:04:37.946306 | orchestrator | 2026-01-08 01:04:37 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:04:40.996170 | orchestrator | 2026-01-08 01:04:40 | INFO  | Task 993b5407-d503-48eb-9395-f5ab7cdec5f2 is in state STARTED 2026-01-08 01:04:40.999391 | orchestrator | 2026-01-08 01:04:41 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:04:41.001639 | orchestrator | 2026-01-08 01:04:41 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:04:41.003601 | orchestrator | 2026-01-08 01:04:41 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:04:41.003660 | orchestrator | 2026-01-08 01:04:41 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:04:44.052940 | orchestrator | 2026-01-08 01:04:44 | INFO  | Task 993b5407-d503-48eb-9395-f5ab7cdec5f2 is in state STARTED 2026-01-08 01:04:44.053672 | orchestrator | 2026-01-08 01:04:44 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:04:44.055993 | orchestrator | 2026-01-08 01:04:44 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:04:44.057491 | orchestrator | 2026-01-08 01:04:44 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:04:44.057528 | orchestrator | 2026-01-08 01:04:44 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:04:47.098204 | orchestrator | 2026-01-08 01:04:47 | INFO  | Task 
993b5407-d503-48eb-9395-f5ab7cdec5f2 is in state STARTED 2026-01-08 01:04:47.099242 | orchestrator | 2026-01-08 01:04:47 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:04:47.101534 | orchestrator | 2026-01-08 01:04:47 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:04:47.102216 | orchestrator | 2026-01-08 01:04:47 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:04:47.102245 | orchestrator | 2026-01-08 01:04:47 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:04:50.150553 | orchestrator | 2026-01-08 01:04:50 | INFO  | Task 993b5407-d503-48eb-9395-f5ab7cdec5f2 is in state STARTED 2026-01-08 01:04:50.152695 | orchestrator | 2026-01-08 01:04:50 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:04:50.154701 | orchestrator | 2026-01-08 01:04:50 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:04:50.156990 | orchestrator | 2026-01-08 01:04:50 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:04:50.157051 | orchestrator | 2026-01-08 01:04:50 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:04:53.258419 | orchestrator | 2026-01-08 01:04:53 | INFO  | Task 993b5407-d503-48eb-9395-f5ab7cdec5f2 is in state STARTED 2026-01-08 01:04:53.261750 | orchestrator | 2026-01-08 01:04:53 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:04:53.265115 | orchestrator | 2026-01-08 01:04:53 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:04:53.267325 | orchestrator | 2026-01-08 01:04:53 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:04:53.267786 | orchestrator | 2026-01-08 01:04:53 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:04:56.303967 | orchestrator | 2026-01-08 01:04:56 | INFO  | Task 
d67c75cb-323f-47e7-8090-7cada9e6f669 is in state STARTED 2026-01-08 01:04:56.304029 | orchestrator | 2026-01-08 01:04:56 | INFO  | Task 993b5407-d503-48eb-9395-f5ab7cdec5f2 is in state SUCCESS 2026-01-08 01:04:56.304759 | orchestrator | 2026-01-08 01:04:56 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:04:56.306447 | orchestrator | 2026-01-08 01:04:56 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:04:56.307702 | orchestrator | 2026-01-08 01:04:56 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:04:56.307732 | orchestrator | 2026-01-08 01:04:56 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:04:59.356981 | orchestrator | 2026-01-08 01:04:59 | INFO  | Task d67c75cb-323f-47e7-8090-7cada9e6f669 is in state STARTED 2026-01-08 01:04:59.359798 | orchestrator | 2026-01-08 01:04:59 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:04:59.360799 | orchestrator | 2026-01-08 01:04:59 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:04:59.362101 | orchestrator | 2026-01-08 01:04:59 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state STARTED 2026-01-08 01:04:59.364501 | orchestrator | 2026-01-08 01:04:59 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:05:02.407252 | orchestrator | 2026-01-08 01:05:02 | INFO  | Task d67c75cb-323f-47e7-8090-7cada9e6f669 is in state STARTED 2026-01-08 01:05:02.412778 | orchestrator | 2026-01-08 01:05:02 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED 2026-01-08 01:05:02.414116 | orchestrator | 2026-01-08 01:05:02 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:05:02.419252 | orchestrator | 2026-01-08 01:05:02 | INFO  | Task 47a3fc76-db33-4afb-9b62-e25a260d0149 is in state SUCCESS 2026-01-08 01:05:02.421430 | orchestrator | 2026-01-08 01:05:02.421482 | orchestrator 
| 2026-01-08 01:05:02.421490 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-01-08 01:05:02.421530 | orchestrator | 2026-01-08 01:05:02.421538 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-01-08 01:05:02.421545 | orchestrator | Thursday 08 January 2026 01:04:19 +0000 (0:00:00.083) 0:00:00.083 ****** 2026-01-08 01:05:02.421551 | orchestrator | changed: [localhost] 2026-01-08 01:05:02.421559 | orchestrator | 2026-01-08 01:05:02.421566 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-01-08 01:05:02.421588 | orchestrator | Thursday 08 January 2026 01:04:20 +0000 (0:00:00.974) 0:00:01.057 ****** 2026-01-08 01:05:02.421594 | orchestrator | changed: [localhost] 2026-01-08 01:05:02.421601 | orchestrator | 2026-01-08 01:05:02.421609 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-01-08 01:05:02.421615 | orchestrator | Thursday 08 January 2026 01:04:48 +0000 (0:00:28.403) 0:00:29.461 ****** 2026-01-08 01:05:02.421621 | orchestrator | changed: [localhost] 2026-01-08 01:05:02.421628 | orchestrator | 2026-01-08 01:05:02.421634 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-08 01:05:02.421641 | orchestrator | 2026-01-08 01:05:02.421648 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-08 01:05:02.421654 | orchestrator | Thursday 08 January 2026 01:04:52 +0000 (0:00:04.144) 0:00:33.605 ****** 2026-01-08 01:05:02.421661 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:05:02.421667 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:05:02.421691 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:05:02.421699 | orchestrator | 2026-01-08 01:05:02.421705 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 
2026-01-08 01:05:02.421711 | orchestrator | Thursday 08 January 2026 01:04:53 +0000 (0:00:00.319) 0:00:33.925 ****** 2026-01-08 01:05:02.421718 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-01-08 01:05:02.421725 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-01-08 01:05:02.421732 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-01-08 01:05:02.421738 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-01-08 01:05:02.421745 | orchestrator | 2026-01-08 01:05:02.421752 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-01-08 01:05:02.421759 | orchestrator | skipping: no hosts matched 2026-01-08 01:05:02.421765 | orchestrator | 2026-01-08 01:05:02.421772 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 01:05:02.421779 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-08 01:05:02.421788 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-08 01:05:02.421796 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-08 01:05:02.421803 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-08 01:05:02.421860 | orchestrator | 2026-01-08 01:05:02.421868 | orchestrator | 2026-01-08 01:05:02.421874 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 01:05:02.421881 | orchestrator | Thursday 08 January 2026 01:04:53 +0000 (0:00:00.654) 0:00:34.580 ****** 2026-01-08 01:05:02.421887 | orchestrator | =============================================================================== 2026-01-08 01:05:02.421893 | orchestrator | Download ironic-agent initramfs 
---------------------------------------- 28.40s 2026-01-08 01:05:02.421899 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.14s 2026-01-08 01:05:02.421906 | orchestrator | Ensure the destination directory exists --------------------------------- 0.97s 2026-01-08 01:05:02.421912 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s 2026-01-08 01:05:02.421918 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2026-01-08 01:05:02.421924 | orchestrator | 2026-01-08 01:05:02.421930 | orchestrator | 2026-01-08 01:05:02.421937 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-08 01:05:02.421943 | orchestrator | 2026-01-08 01:05:02.421949 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-08 01:05:02.421955 | orchestrator | Thursday 08 January 2026 01:01:40 +0000 (0:00:00.475) 0:00:00.475 ****** 2026-01-08 01:05:02.421966 | orchestrator | ok: [testbed-manager] 2026-01-08 01:05:02.421972 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:05:02.421978 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:05:02.421985 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:05:02.421991 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:05:02.421996 | orchestrator | ok: [testbed-node-4] 2026-01-08 01:05:02.422003 | orchestrator | ok: [testbed-node-5] 2026-01-08 01:05:02.422009 | orchestrator | 2026-01-08 01:05:02.422050 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-08 01:05:02.422062 | orchestrator | Thursday 08 January 2026 01:01:41 +0000 (0:00:01.270) 0:00:01.746 ****** 2026-01-08 01:05:02.422068 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-01-08 01:05:02.422075 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-01-08 
01:05:02.422081 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-01-08 01:05:02.422087 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-01-08 01:05:02.422093 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-01-08 01:05:02.422099 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-01-08 01:05:02.422105 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-01-08 01:05:02.422111 | orchestrator | 2026-01-08 01:05:02.422118 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-01-08 01:05:02.422124 | orchestrator | 2026-01-08 01:05:02.422141 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-08 01:05:02.422147 | orchestrator | Thursday 08 January 2026 01:01:42 +0000 (0:00:01.044) 0:00:02.791 ****** 2026-01-08 01:05:02.422153 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 01:05:02.422161 | orchestrator | 2026-01-08 01:05:02.422167 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-01-08 01:05:02.422172 | orchestrator | Thursday 08 January 2026 01:01:44 +0000 (0:00:02.094) 0:00:04.886 ****** 2026-01-08 01:05:02.422185 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-08 01:05:02.422195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-08 01:05:02.422202 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-08 01:05:02.422213 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-08 01:05:02.422220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.422227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-08 01:05:02.422237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-08 01:05:02.422246 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.422253 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.422260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.422270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.422277 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-08 01:05:02.422283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.422290 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-08 01:05:02.422302 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:05:02.422309 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.422316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.422380 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.422388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.422395 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.422405 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.422412 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.422422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.422429 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.422440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.422446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.422452 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.422459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.422469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.422490 | orchestrator | 2026-01-08 01:05:02.422496 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-08 01:05:02.422503 | orchestrator | Thursday 08 January 2026 01:01:49 +0000 (0:00:04.263) 0:00:09.149 ****** 2026-01-08 01:05:02.422509 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 01:05:02.422515 | orchestrator | 2026-01-08 01:05:02.422521 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-01-08 01:05:02.422527 | orchestrator | Thursday 08 January 2026 01:01:50 +0000 (0:00:01.209) 0:00:10.359 ****** 2026-01-08 01:05:02.422537 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-08 01:05:02.422547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-08 01:05:02.422553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-08 01:05:02.422560 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-08 01:05:02.422566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-08 01:05:02.422577 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-08 01:05:02.422583 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-08 01:05:02.422596 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-08 01:05:02.422606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.422613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.422619 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2026-01-08 01:05:02.422626 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.422633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.422642 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.422648 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.422660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.422667 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.422673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.422680 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.422686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.422692 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.422747 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 
'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:05:02.422764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.422771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.422777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.422784 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.422791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.422797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-08 01:05:02.422809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.422819 | orchestrator | 2026-01-08 01:05:02.422826 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-01-08 01:05:02.422833 | orchestrator | Thursday 08 January 2026 01:01:56 +0000 (0:00:05.802) 0:00:16.161 ****** 2026-01-08 01:05:02.422940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 01:05:02.422949 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-08 01:05:02.422956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 01:05:02.422963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.422970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.422982 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 01:05:02.422989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 01:05:02.423003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.423011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 01:05:02.423034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.423041 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 01:05:02.423047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 01:05:02.423058 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:05:02.423070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 01:05:02.423080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 01:05:02.423086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.423093 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:05:02.423100 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-08 01:05:02.423107 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:05:02.423114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.423120 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.423127 | orchestrator | skipping: [testbed-manager] 2026-01-08 01:05:02.423134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.423144 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:05:02.423154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.423161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 01:05:02.423171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.423178 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:05:02.423184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 01:05:02.423190 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 01:05:02.423197 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 01:05:02.423203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 01:05:02.423216 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-08 01:05:02.423222 | orchestrator | skipping: 
[testbed-node-4] 2026-01-08 01:05:02.423229 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-08 01:05:02.423235 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:05:02.423242 | orchestrator | 2026-01-08 01:05:02.423248 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-01-08 01:05:02.423255 | orchestrator | Thursday 08 January 2026 01:01:58 +0000 (0:00:02.396) 0:00:18.558 ****** 2026-01-08 01:05:02.423266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 01:05:02.423273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.423280 | orchestrator | 
skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-08 01:05:02.423287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 01:05:02.423305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.423314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 01:05:02.423321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.423331 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 01:05:02.423338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.423344 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:05:02.423350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.423356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 01:05:02.423364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.423373 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 01:05:02.423383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.423389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 01:05:02.423399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 01:05:02.423405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.423412 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:05:02.423428 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:05:02.423439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 01:05:02.423728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.423739 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:05:02.423745 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.423752 | orchestrator | skipping: [testbed-manager] 2026-01-08 01:05:02.423761 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 01:05:02.423768 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 01:05:02.423774 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 01:05:02.423781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-08 01:05:02.423791 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:05:02.423797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-08 01:05:02.423803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-08 01:05:02.423814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-08 01:05:02.423820 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:05:02.423827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-08 01:05:02.423833 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:05:02.423853 | orchestrator |
2026-01-08 01:05:02.423859 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-01-08 01:05:02.423865 | orchestrator | Thursday 08 January 2026 01:02:01 +0000 (0:00:02.741) 0:00:21.299 ******
2026-01-08 01:05:02.423874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-08 01:05:02.423881 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-01-08 01:05:02.423890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-08 01:05:02.423897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-08 01:05:02.423906 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-08 01:05:02.423913 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-08 01:05:02.423921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 01:05:02.423928 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-08 01:05:02.423934 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-08 01:05:02.423943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 01:05:02.423950 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-08 01:05:02.423956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 01:05:02.423979 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-08 01:05:02.423986 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-08 01:05:02.423995 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-08 01:05:02.424002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 01:05:02.424009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 01:05:02.424019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 01:05:02.424026 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-08 01:05:02.424033 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-08 01:05:02.424043 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:05:02.424053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-08 01:05:02.424060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-08 01:05:02.424072 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-08 01:05:02.424078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-08 01:05:02.424085 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 01:05:02.424091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 01:05:02.424101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 01:05:02.424107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-08 01:05:02.424113 | orchestrator |
2026-01-08 01:05:02.424119 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2026-01-08 01:05:02.424125 | orchestrator | Thursday 08 January 2026 01:02:07 +0000 (0:00:05.918) 0:00:27.217 ******
2026-01-08 01:05:02.424132 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-08 01:05:02.424138 | orchestrator |
2026-01-08 01:05:02.424144 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2026-01-08 01:05:02.424153 | orchestrator | Thursday 08 January 2026 01:02:08 +0000 (0:00:01.289) 0:00:28.507 ******
2026-01-08 01:05:02.424159 | orchestrator | skipping: [testbed-manager]
2026-01-08 01:05:02.424168 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:05:02.424175 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:05:02.424181 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:05:02.424187 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:05:02.424193 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:05:02.424199 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:05:02.424205 | orchestrator |
2026-01-08 01:05:02.424211 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-01-08 01:05:02.424217 | orchestrator | Thursday 08 January 2026 01:02:09 +0000 (0:00:00.700) 0:00:29.207 ******
2026-01-08 01:05:02.424223 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-08 01:05:02.424229 | orchestrator |
2026-01-08 01:05:02.424236 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-01-08 01:05:02.424242 | orchestrator | Thursday 08 January 2026 01:02:10 +0000 (0:00:00.751) 0:00:29.959 ******
2026-01-08 01:05:02.424248 | orchestrator | [WARNING]: Skipped
2026-01-08 01:05:02.424254 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-08 01:05:02.424261 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-01-08 01:05:02.424267 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-08 01:05:02.424273 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-01-08 01:05:02.424279 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-08 01:05:02.424285 | orchestrator | [WARNING]: Skipped
2026-01-08 01:05:02.424291 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-08 01:05:02.424297 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-01-08 01:05:02.424303 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-08 01:05:02.424309 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-01-08 01:05:02.424315 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-08 01:05:02.424322 | orchestrator | [WARNING]: Skipped
2026-01-08 01:05:02.424327 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-08 01:05:02.424333 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-01-08 01:05:02.424339 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-08 01:05:02.424345 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-01-08 01:05:02.424351 | orchestrator | [WARNING]: Skipped
2026-01-08 01:05:02.424358 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-08 01:05:02.424364 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-01-08 01:05:02.424371 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-08 01:05:02.424377 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-01-08 01:05:02.424383 | orchestrator | [WARNING]: Skipped
2026-01-08 01:05:02.424388 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-08 01:05:02.424394 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-01-08 01:05:02.424400 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-08 01:05:02.424407 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-01-08 01:05:02.424414 | orchestrator | [WARNING]: Skipped
2026-01-08 01:05:02.424421 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-08 01:05:02.424427 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-01-08 01:05:02.424433 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-08 01:05:02.424439 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-01-08 01:05:02.424445 | orchestrator | [WARNING]: Skipped
2026-01-08 01:05:02.424452 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-08 01:05:02.424461 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-01-08 01:05:02.424468 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-01-08 01:05:02.424474 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-01-08 01:05:02.424480 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-08 01:05:02.424489 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-08 01:05:02.424495 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-08 01:05:02.424501 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-08 01:05:02.424507 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-08 01:05:02.424513 | orchestrator |
2026-01-08 01:05:02.424519 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-01-08 01:05:02.424525 | orchestrator | Thursday 08 January 2026 01:02:11 +0000 (0:00:01.726) 0:00:31.686 ******
2026-01-08 01:05:02.424532 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-08 01:05:02.424538 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:05:02.424546 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-08 01:05:02.424551 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:05:02.424558 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-08 01:05:02.424564 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:05:02.424570 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-08 01:05:02.424576 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:05:02.424585 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-08 01:05:02.424591 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:05:02.424598 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-08 01:05:02.424604 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:05:02.424610 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-08 01:05:02.424616 | orchestrator |
2026-01-08 01:05:02.424622 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-01-08 01:05:02.424629 | orchestrator | Thursday 08 January 2026 01:02:28 +0000 (0:00:16.668) 0:00:48.355 ******
2026-01-08 01:05:02.424635 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-08 01:05:02.424642 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:05:02.424648 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-08 01:05:02.424654 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:05:02.424660 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-08 01:05:02.424666 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:05:02.424672 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-08 01:05:02.424679 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:05:02.424685 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-08 01:05:02.424691 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:05:02.424698 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-08 01:05:02.424704 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:05:02.424710 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-08 01:05:02.424716 | orchestrator |
2026-01-08 01:05:02.424722 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-01-08 01:05:02.424729 | orchestrator | Thursday 08 January 2026 01:02:35 +0000 (0:00:06.657) 0:00:55.012 ******
2026-01-08 01:05:02.424739 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-08 01:05:02.424746 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:05:02.424753 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-08 01:05:02.424759 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-08 01:05:02.424765 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:05:02.424771 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:05:02.424777 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-08 01:05:02.424783 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:05:02.424789 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-08 01:05:02.424795 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:05:02.424801 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-08 01:05:02.424807 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-08 01:05:02.424813 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:05:02.424819 | orchestrator |
2026-01-08 01:05:02.424825 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-01-08 01:05:02.424832 | orchestrator | Thursday 08 January 2026 01:02:36 +0000 (0:00:01.756) 0:00:56.769 ******
2026-01-08 01:05:02.424847 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-08 01:05:02.424854 | orchestrator |
2026-01-08 01:05:02.424863 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-01-08 01:05:02.424869 | orchestrator | Thursday 08 January 2026 01:02:37 +0000 (0:00:00.826) 0:00:57.595 ******
2026-01-08 01:05:02.424876 | orchestrator | skipping: [testbed-manager]
2026-01-08 01:05:02.424882 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:05:02.424888 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:05:02.424894 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:05:02.424900 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:05:02.424906 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:05:02.424912 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:05:02.424918 | orchestrator |
2026-01-08 01:05:02.424924 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-01-08 01:05:02.424930 | orchestrator | Thursday 08 January 2026 01:02:38 +0000 (0:00:00.682) 0:00:58.278 ******
2026-01-08 01:05:02.424936 | orchestrator | skipping: [testbed-manager]
2026-01-08 01:05:02.424943 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:05:02.424949 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:05:02.424954 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:05:02.424960 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:05:02.424966 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:05:02.424972 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:05:02.424978 | orchestrator |
2026-01-08 01:05:02.424984 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-01-08 01:05:02.424991 | orchestrator | Thursday 08 January 2026 01:02:41 +0000 (0:00:03.470) 0:01:01.748 ******
2026-01-08 01:05:02.424999 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-08 01:05:02.425006 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-08 01:05:02.425011 | orchestrator | skipping: [testbed-manager]
2026-01-08 01:05:02.425020 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-08 01:05:02.425030 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:05:02.425037 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:05:02.425043 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-08 01:05:02.425049 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:05:02.425055 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-08 01:05:02.425061 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:05:02.425067 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-08 01:05:02.425073 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:05:02.425079 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-08 01:05:02.425085 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:05:02.425091 | orchestrator |
2026-01-08 01:05:02.425097 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-01-08 01:05:02.425103 | orchestrator | Thursday 08 January 2026 01:02:45 +0000 (0:00:03.199) 0:01:04.948 ******
2026-01-08 01:05:02.425109 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-08 01:05:02.425116 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-08 01:05:02.425122 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-08 01:05:02.425127 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:05:02.425133 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:05:02.425139 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:05:02.425146 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-08 01:05:02.425151 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:05:02.425157 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-08 01:05:02.425164 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-08 01:05:02.425170 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:05:02.425176 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-08 01:05:02.425181 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:05:02.425188 | orchestrator |
2026-01-08 01:05:02.425194 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-01-08 01:05:02.425199 | orchestrator | Thursday 08 January 2026 01:02:47 +0000 (0:00:02.851) 0:01:07.799 ******
2026-01-08 01:05:02.425205 | orchestrator | [WARNING]: Skipped
2026-01-08 01:05:02.425212 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-01-08 01:05:02.425218 | orchestrator | due to this access issue:
2026-01-08 01:05:02.425224 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-01-08 01:05:02.425229 | orchestrator | not a directory
2026-01-08 01:05:02.425235 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-08 01:05:02.425241 | orchestrator |
2026-01-08 01:05:02.425247 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-01-08 01:05:02.425253 | orchestrator | Thursday 08 January 2026 01:02:49 +0000 (0:00:02.051) 0:01:09.851 ******
2026-01-08 01:05:02.425259 | orchestrator | skipping: [testbed-manager]
2026-01-08 01:05:02.425265 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:05:02.425271 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:05:02.425277 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:05:02.425283 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:05:02.425289 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:05:02.425295 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:05:02.425301 | orchestrator |
2026-01-08 01:05:02.425316 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-01-08 01:05:02.425322 | orchestrator | Thursday 08 January 2026 01:02:51 +0000 (0:00:01.218) 0:01:11.069 ******
2026-01-08 01:05:02.425328 | orchestrator | skipping: [testbed-manager]
2026-01-08 01:05:02.425334 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:05:02.425340 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:05:02.425346 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:05:02.425352 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:05:02.425358 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:05:02.425364 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:05:02.425370 | orchestrator |
2026-01-08 01:05:02.425376 | orchestrator | TASK [service-check-containers : prometheus | Check containers] ****************
2026-01-08 01:05:02.425382 | orchestrator | Thursday 08 January 2026 01:02:52 +0000 (0:00:01.652) 0:01:12.721 ******
2026-01-08 01:05:02.425392 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0',
"http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-08 01:05:02.425399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-08 01:05:02.425406 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-08 01:05:02.425412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-08 01:05:02.425418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.425433 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-08 01:05:02.425440 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.425449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-08 01:05:02.425456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.425462 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-08 01:05:02.425468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.425475 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 
'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:05:02.425490 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-08 01:05:02.425496 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.425505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.425511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.425517 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.425523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.425530 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.425540 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.425549 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.425556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.425566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.425573 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.425579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.425585 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.425592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.425602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-08 01:05:02.425612 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-08 01:05:02.425619 | orchestrator | 2026-01-08 01:05:02.425626 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] *** 2026-01-08 01:05:02.425632 | orchestrator | Thursday 08 January 2026 01:02:59 +0000 (0:00:07.112) 0:01:19.834 ****** 2026-01-08 01:05:02.425638 | orchestrator | changed: [testbed-manager] => { 2026-01-08 01:05:02.425644 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:05:02.425650 | orchestrator | } 2026-01-08 01:05:02.425656 | orchestrator | changed: [testbed-node-0] => { 2026-01-08 01:05:02.425662 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:05:02.425669 | orchestrator | } 2026-01-08 01:05:02.425674 | orchestrator | changed: [testbed-node-1] => { 2026-01-08 01:05:02.425680 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:05:02.425686 | orchestrator | } 2026-01-08 01:05:02.425692 | orchestrator | changed: [testbed-node-2] => { 2026-01-08 01:05:02.425698 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:05:02.425705 | orchestrator | } 2026-01-08 01:05:02.425711 | orchestrator | changed: [testbed-node-3] => { 2026-01-08 01:05:02.425717 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:05:02.425723 | orchestrator | } 2026-01-08 01:05:02.425729 | orchestrator | changed: [testbed-node-4] => { 2026-01-08 01:05:02.425738 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:05:02.425744 | orchestrator | } 2026-01-08 01:05:02.425750 | orchestrator | 
changed: [testbed-node-5] => { 2026-01-08 01:05:02.425756 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:05:02.425762 | orchestrator | } 2026-01-08 01:05:02.425768 | orchestrator | 2026-01-08 01:05:02.425774 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-08 01:05:02.425781 | orchestrator | Thursday 08 January 2026 01:03:02 +0000 (0:00:02.296) 0:01:22.130 ****** 2026-01-08 01:05:02.425788 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-server:2025.1', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-08 01:05:02.425798 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 01:05:02.425806 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 01:05:02.425817 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2025.1', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:05:02.425824 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': 
['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.425833 | orchestrator | skipping: [testbed-manager] 2026-01-08 01:05:02.425854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 01:05:02.425861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 01:05:02.425872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.425879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.425886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.425892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 01:05:02.425903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.425909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.425920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 01:05:02.425926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.425936 | orchestrator 
| skipping: [testbed-node-1] 2026-01-08 01:05:02.425942 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:05:02.425949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 01:05:02.425956 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 01:05:02.425963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.425973 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 01:05:02.425980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-08 01:05:02.425990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.425996 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:05:02.426003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 01:05:02.426035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-08 01:05:02.426044 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:05:02.426051 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 01:05:02.426057 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 01:05:02.426063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-08 01:05:02.426070 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:05:02.426121 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-08 01:05:02.426129 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-08 01:05:02.426139 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-08 01:05:02.426150 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:05:02.426157 | orchestrator |
2026-01-08 01:05:02.426162 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-01-08 01:05:02.426168 | orchestrator | Thursday 08 January 2026 01:03:05 +0000 (0:00:03.098) 0:01:25.229 ******
2026-01-08 01:05:02.426175 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-08 01:05:02.426182 | orchestrator | skipping: [testbed-manager]
2026-01-08 01:05:02.426189 | orchestrator |
2026-01-08 01:05:02.426195 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-08 01:05:02.426202 | orchestrator | Thursday 08 January 2026 01:03:08 +0000 (0:00:03.600) 0:01:28.830 ******
2026-01-08 01:05:02.426208 | orchestrator |
2026-01-08 01:05:02.426214 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-08 01:05:02.426220 | orchestrator | Thursday 08 January 2026 01:03:09 +0000 (0:00:00.207) 0:01:29.037 ******
2026-01-08 01:05:02.426226 | orchestrator |
2026-01-08 01:05:02.426232 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-08 01:05:02.426238 | orchestrator | Thursday 08 January 2026 01:03:09 +0000 (0:00:00.194) 0:01:29.232 ******
2026-01-08 01:05:02.426243 | orchestrator |
2026-01-08 01:05:02.426249 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-08 01:05:02.426255 | orchestrator | Thursday 08 January 2026 01:03:09 +0000 (0:00:00.151) 0:01:29.383 ******
2026-01-08 01:05:02.426261 | orchestrator |
2026-01-08 01:05:02.426267 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-08 01:05:02.426272 | orchestrator | Thursday 08 January 2026 01:03:09 +0000 (0:00:00.174) 0:01:29.557 ******
2026-01-08 01:05:02.426278 | orchestrator |
2026-01-08 01:05:02.426284 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-08 01:05:02.426290 | orchestrator | Thursday 08 January 2026 01:03:09 +0000 (0:00:00.227) 0:01:29.785 ******
2026-01-08 01:05:02.426295 | orchestrator |
2026-01-08 01:05:02.426301 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-08 01:05:02.426307 | orchestrator | Thursday 08 January 2026 01:03:10 +0000 (0:00:00.835) 0:01:30.620 ******
2026-01-08 01:05:02.426313 | orchestrator |
2026-01-08 01:05:02.426318 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-01-08 01:05:02.426324 | orchestrator | Thursday 08 January 2026 01:03:10 +0000 (0:00:00.220) 0:01:30.841 ******
2026-01-08 01:05:02.426330 | orchestrator | changed: [testbed-manager]
2026-01-08 01:05:02.426336 | orchestrator |
2026-01-08 01:05:02.426342 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-01-08 01:05:02.426348 | orchestrator | Thursday 08 January 2026 01:03:24 +0000 (0:00:13.553) 0:01:44.394 ******
2026-01-08 01:05:02.426354 | orchestrator | changed: [testbed-manager]
2026-01-08 01:05:02.426359 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:05:02.426365 | orchestrator | changed: [testbed-node-3]
2026-01-08 01:05:02.426371 | orchestrator | changed: [testbed-node-4]
2026-01-08 01:05:02.426377 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:05:02.426383 | orchestrator | changed: [testbed-node-5]
2026-01-08 01:05:02.426389 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:05:02.426396 | orchestrator |
2026-01-08 01:05:02.426403 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-01-08 01:05:02.426410 | orchestrator | Thursday 08 January 2026 01:03:42 +0000 (0:00:17.634) 0:02:02.029 ******
2026-01-08 01:05:02.426417 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:05:02.426423 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:05:02.426430 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:05:02.426436 | orchestrator |
2026-01-08 01:05:02.426442 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-01-08 01:05:02.426455 | orchestrator | Thursday 08 January 2026 01:03:55 +0000 (0:00:13.221) 0:02:15.250 ******
2026-01-08 01:05:02.426461 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:05:02.426467 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:05:02.426473 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:05:02.426479 | orchestrator |
2026-01-08 01:05:02.426485 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-01-08 01:05:02.426491 | orchestrator | Thursday 08 January 2026 01:04:08 +0000 (0:00:12.902) 0:02:28.153 ******
2026-01-08 01:05:02.426500 | orchestrator | changed: [testbed-manager]
2026-01-08 01:05:02.426506 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:05:02.426513 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:05:02.426519 | orchestrator | changed: [testbed-node-5]
2026-01-08 01:05:02.426525 | orchestrator | changed: [testbed-node-4]
2026-01-08 01:05:02.426531 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:05:02.426536 | orchestrator | changed: [testbed-node-3]
2026-01-08 01:05:02.426542 | orchestrator |
2026-01-08 01:05:02.426549 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-01-08 01:05:02.426555 | orchestrator | Thursday 08 January 2026 01:04:23 +0000 (0:00:15.739) 0:02:43.892 ******
2026-01-08 01:05:02.426561 | orchestrator | changed: [testbed-manager]
2026-01-08 01:05:02.426568 | orchestrator |
2026-01-08 01:05:02.426574 | orchestrator | RUNNING HANDLER
[prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-01-08 01:05:02.426580 | orchestrator | Thursday 08 January 2026 01:04:32 +0000 (0:00:08.216) 0:02:52.108 ******
2026-01-08 01:05:02.426587 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:05:02.426594 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:05:02.426600 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:05:02.426607 | orchestrator |
2026-01-08 01:05:02.426613 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-01-08 01:05:02.426620 | orchestrator | Thursday 08 January 2026 01:04:44 +0000 (0:00:12.092) 0:03:04.201 ******
2026-01-08 01:05:02.426626 | orchestrator | changed: [testbed-manager]
2026-01-08 01:05:02.426633 | orchestrator |
2026-01-08 01:05:02.426642 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-01-08 01:05:02.426648 | orchestrator | Thursday 08 January 2026 01:04:50 +0000 (0:00:06.194) 0:03:10.395 ******
2026-01-08 01:05:02.426654 | orchestrator | changed: [testbed-node-4]
2026-01-08 01:05:02.426660 | orchestrator | changed: [testbed-node-5]
2026-01-08 01:05:02.426667 | orchestrator | changed: [testbed-node-3]
2026-01-08 01:05:02.426674 | orchestrator |
2026-01-08 01:05:02.426680 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 01:05:02.426688 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2026-01-08 01:05:02.426695 | orchestrator | testbed-node-0 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-08 01:05:02.426702 | orchestrator | testbed-node-1 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-08 01:05:02.426708 | orchestrator | testbed-node-2 : ok=16  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-08 01:05:02.426715 | orchestrator | testbed-node-3 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-01-08 01:05:02.426722 | orchestrator | testbed-node-4 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-01-08 01:05:02.426729 | orchestrator | testbed-node-5 : ok=13  changed=8  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-01-08 01:05:02.426735 | orchestrator |
2026-01-08 01:05:02.426742 | orchestrator |
2026-01-08 01:05:02.426753 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 01:05:02.426759 | orchestrator | Thursday 08 January 2026 01:05:00 +0000 (0:00:09.564) 0:03:19.960 ******
2026-01-08 01:05:02.426766 | orchestrator | ===============================================================================
2026-01-08 01:05:02.426772 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 17.63s
2026-01-08 01:05:02.426778 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.67s
2026-01-08 01:05:02.426785 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.74s
2026-01-08 01:05:02.426791 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 13.55s
2026-01-08 01:05:02.426797 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 13.22s
2026-01-08 01:05:02.426802 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 12.90s
2026-01-08 01:05:02.426808 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 12.09s
2026-01-08 01:05:02.426814 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 9.57s
2026-01-08 01:05:02.426820 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.22s
2026-01-08 01:05:02.426826 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 7.11s
2026-01-08 01:05:02.426832 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 6.66s
2026-01-08 01:05:02.426838 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 6.19s
2026-01-08 01:05:02.426856 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.92s
2026-01-08 01:05:02.426862 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.80s
2026-01-08 01:05:02.426868 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.26s
2026-01-08 01:05:02.426874 | orchestrator | prometheus : Creating prometheus database user and setting permissions --- 3.60s
2026-01-08 01:05:02.426880 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.47s
2026-01-08 01:05:02.426886 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 3.20s
2026-01-08 01:05:02.426896 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.10s
2026-01-08 01:05:02.426902 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.85s
2026-01-08 01:05:02.426908 | orchestrator | 2026-01-08 01:05:02 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:05:02.426914 | orchestrator | 2026-01-08 01:05:02 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:05:05.475928 | orchestrator | 2026-01-08 01:05:05 | INFO  | Task d67c75cb-323f-47e7-8090-7cada9e6f669 is in state STARTED
2026-01-08 01:05:05.477507 | orchestrator | 2026-01-08 01:05:05 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED
2026-01-08 01:05:05.479037 | orchestrator | 2026-01-08 01:05:05 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED
2026-01-08 01:05:05.480401 | orchestrator | 2026-01-08 01:05:05 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:05:05.480431 | orchestrator | 2026-01-08 01:05:05 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:05:08.530466 | orchestrator | 2026-01-08 01:05:08 | INFO  | Task d67c75cb-323f-47e7-8090-7cada9e6f669 is in state STARTED
2026-01-08 01:05:08.533564 | orchestrator | 2026-01-08 01:05:08 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED
2026-01-08 01:05:08.535758 | orchestrator | 2026-01-08 01:05:08 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED
2026-01-08 01:05:08.538427 | orchestrator | 2026-01-08 01:05:08 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:05:08.538541 | orchestrator | 2026-01-08 01:05:08 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:05:11.579811 | orchestrator | 2026-01-08 01:05:11 | INFO  | Task d67c75cb-323f-47e7-8090-7cada9e6f669 is in state STARTED
2026-01-08 01:05:11.584007 | orchestrator | 2026-01-08 01:05:11 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state STARTED
2026-01-08 01:05:11.586624 | orchestrator | 2026-01-08 01:05:11 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED
2026-01-08 01:05:11.590543 | orchestrator | 2026-01-08 01:05:11 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:05:11.594848 | orchestrator | 2026-01-08 01:05:11 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:05:14.640641 | orchestrator | 2026-01-08 01:05:14 | INFO  | Task d67c75cb-323f-47e7-8090-7cada9e6f669 is in state STARTED
2026-01-08 01:05:14.643582 | orchestrator | 2026-01-08 01:05:14 | INFO  | Task 974ab045-75c2-4dde-b1dc-c0b18756fad4 is in state SUCCESS
2026-01-08 01:05:14.645269 | orchestrator |
2026-01-08 01:05:14.645303 | orchestrator |
2026-01-08 01:05:14.645307 | orchestrator | PLAY [Group hosts based on configuration]
**************************************
2026-01-08 01:05:14.645311 | orchestrator |
2026-01-08 01:05:14.645315 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-08 01:05:14.645318 | orchestrator | Thursday 08 January 2026 01:01:49 +0000 (0:00:00.195) 0:00:00.195 ******
2026-01-08 01:05:14.645322 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:05:14.645326 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:05:14.645329 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:05:14.645332 | orchestrator |
2026-01-08 01:05:14.645335 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-08 01:05:14.645339 | orchestrator | Thursday 08 January 2026 01:01:49 +0000 (0:00:00.265) 0:00:00.461 ******
2026-01-08 01:05:14.645342 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-01-08 01:05:14.645346 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-01-08 01:05:14.645349 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-01-08 01:05:14.645352 | orchestrator |
2026-01-08 01:05:14.645355 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-01-08 01:05:14.645358 | orchestrator |
2026-01-08 01:05:14.645362 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-01-08 01:05:14.645365 | orchestrator | Thursday 08 January 2026 01:01:50 +0000 (0:00:00.404) 0:00:00.866 ******
2026-01-08 01:05:14.645368 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 01:05:14.645372 | orchestrator |
2026-01-08 01:05:14.645375 | orchestrator | TASK [service-ks-register : designate | Creating/deleting services] ************
2026-01-08 01:05:14.645378 | orchestrator | Thursday 08 January 2026 01:01:50 +0000 (0:00:00.678) 0:00:01.544 ******
2026-01-08 01:05:14.645381 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-01-08 01:05:14.645384 | orchestrator |
2026-01-08 01:05:14.645388 | orchestrator | TASK [service-ks-register : designate | Creating/deleting endpoints] ***********
2026-01-08 01:05:14.645391 | orchestrator | Thursday 08 January 2026 01:01:55 +0000 (0:00:04.386) 0:00:05.931 ******
2026-01-08 01:05:14.645394 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-01-08 01:05:14.645398 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-01-08 01:05:14.645402 | orchestrator |
2026-01-08 01:05:14.645442 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-01-08 01:05:14.645448 | orchestrator | Thursday 08 January 2026 01:02:02 +0000 (0:00:06.828) 0:00:12.760 ******
2026-01-08 01:05:14.645452 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-08 01:05:14.645457 | orchestrator |
2026-01-08 01:05:14.645461 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-01-08 01:05:14.645501 | orchestrator | Thursday 08 January 2026 01:02:05 +0000 (0:00:03.503) 0:00:16.263 ******
2026-01-08 01:05:14.645507 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-08 01:05:14.645513 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-01-08 01:05:14.645518 | orchestrator |
2026-01-08 01:05:14.645523 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-01-08 01:05:14.645528 | orchestrator | Thursday 08 January 2026 01:02:09 +0000 (0:00:03.732) 0:00:19.996 ******
2026-01-08 01:05:14.645534 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-08 01:05:14.645539 | orchestrator |
2026-01-08 01:05:14.645545 | orchestrator | TASK [service-ks-register :
designate | Granting/revoking user roles] ********** 2026-01-08 01:05:14.645550 | orchestrator | Thursday 08 January 2026 01:02:12 +0000 (0:00:03.366) 0:00:23.363 ****** 2026-01-08 01:05:14.645555 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-01-08 01:05:14.645560 | orchestrator | 2026-01-08 01:05:14.645567 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-01-08 01:05:14.645581 | orchestrator | Thursday 08 January 2026 01:02:17 +0000 (0:00:04.597) 0:00:27.961 ****** 2026-01-08 01:05:14.645588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:05:14.645606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:05:14.645612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:05:14.645623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-08 01:05:14.645632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-08 01:05:14.645637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-08 01:05:14.645646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.645651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.645656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.645664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.645669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.645677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.645682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.645691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.645697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.645703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.645712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.645717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.645722 | orchestrator | 2026-01-08 01:05:14.645730 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-01-08 01:05:14.645736 | orchestrator | Thursday 08 January 2026 01:02:20 +0000 (0:00:02.877) 0:00:30.838 ****** 2026-01-08 01:05:14.645742 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:05:14.645747 | orchestrator | 2026-01-08 01:05:14.645752 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-01-08 
01:05:14.645758 | orchestrator | Thursday 08 January 2026 01:02:20 +0000 (0:00:00.155) 0:00:30.994 ****** 2026-01-08 01:05:14.645764 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:05:14.645770 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:05:14.645773 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:05:14.645776 | orchestrator | 2026-01-08 01:05:14.645780 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-08 01:05:14.645783 | orchestrator | Thursday 08 January 2026 01:02:20 +0000 (0:00:00.303) 0:00:31.298 ****** 2026-01-08 01:05:14.645786 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 01:05:14.645789 | orchestrator | 2026-01-08 01:05:14.645792 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-01-08 01:05:14.645796 | orchestrator | Thursday 08 January 2026 01:02:21 +0000 (0:00:00.760) 0:00:32.058 ****** 2026-01-08 01:05:14.645802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 
'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:05:14.645808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:05:14.645812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 
'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:05:14.645817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-08 01:05:14.645820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-08 01:05:14.646082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-08 01:05:14.646102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.646108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.646113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.646122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.646127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.646133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-08 
01:05:14.646144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.646154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.646159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.646165 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.646173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.646178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.646184 | orchestrator | 2026-01-08 01:05:14.646189 | orchestrator | TASK [service-cert-copy : designate | Copying 
over backend internal TLS certificate] *** 2026-01-08 01:05:14.646194 | orchestrator | Thursday 08 January 2026 01:02:27 +0000 (0:00:06.037) 0:00:38.096 ****** 2026-01-08 01:05:14.646201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:05:14.646210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-08 01:05:14.646213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:05:14.646219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.646222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.646226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-08 01:05:14.646234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:05:14.646237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.646241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.646246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.646249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-08 01:05:14.646252 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:05:14.646256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.646263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.646266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.646270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.646273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.646276 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:05:14.646281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646290 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:05:14.646293 | orchestrator |
2026-01-08 01:05:14.646296 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2026-01-08 01:05:14.646300 | orchestrator | Thursday 08 January 2026 01:02:30 +0000 (0:00:02.771) 0:00:40.867 ******
2026-01-08 01:05:14.646305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:05:14.646309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:05:14.646312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-08 01:05:14.646317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-08 01:05:14.646320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646344 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:05:14.646349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:05:14.646356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-08 01:05:14.646359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646373 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:05:14.646378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646390 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:05:14.646398 | orchestrator |
2026-01-08 01:05:14.646404 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-01-08 01:05:14.646412 | orchestrator | Thursday 08 January 2026 01:02:33 +0000 (0:00:02.971) 0:00:43.838 ******
2026-01-08 01:05:14.646417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:05:14.646423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:05:14.646430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:05:14.646479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-08 01:05:14.646488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-08 01:05:14.646493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-08 01:05:14.646499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646841 | orchestrator |
2026-01-08 01:05:14.646845 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-01-08 01:05:14.646850 | orchestrator | Thursday 08 January 2026 01:02:40 +0000 (0:00:07.584) 0:00:51.423 ******
2026-01-08 01:05:14.646875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:05:14.646880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:05:14.646888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:05:14.646892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-08 01:05:14.646898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-08 01:05:14.646902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-08 01:05:14.646905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.646970 | orchestrator |
2026-01-08 01:05:14.646975 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-01-08 01:05:14.646979 | orchestrator | Thursday 08 January 2026 01:03:04 +0000 (0:00:24.056) 0:01:15.479 ******
2026-01-08 01:05:14.647003 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-01-08 01:05:14.647007 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-01-08 01:05:14.647010 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-01-08 01:05:14.647013 | orchestrator |
2026-01-08 01:05:14.647016 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-01-08 01:05:14.647019 | orchestrator | Thursday 08 January 2026 01:03:12 +0000 (0:00:07.501) 0:01:22.981 ******
2026-01-08 01:05:14.647023 | orchestrator | changed: [testbed-node-0] =>
(item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-08 01:05:14.647026 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-08 01:05:14.647029 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-08 01:05:14.647032 | orchestrator | 2026-01-08 01:05:14.647035 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-01-08 01:05:14.647038 | orchestrator | Thursday 08 January 2026 01:03:16 +0000 (0:00:03.822) 0:01:26.803 ****** 2026-01-08 01:05:14.647044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:05:14.647049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:05:14.647053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:05:14.647059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647287 | orchestrator | 2026-01-08 01:05:14.647291 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-01-08 01:05:14.647294 | orchestrator | Thursday 08 January 2026 01:03:20 +0000 (0:00:04.385) 0:01:31.188 ****** 2026-01-08 01:05:14.647298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:05:14.647304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:05:14.647307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:05:14.647314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
2026-01-08 01:05:14.647441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647446 | orchestrator | 2026-01-08 01:05:14.647451 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-08 01:05:14.647457 | orchestrator | Thursday 08 January 2026 01:03:24 +0000 (0:00:03.891) 0:01:35.080 ****** 2026-01-08 01:05:14.647463 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:05:14.647466 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:05:14.647469 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:05:14.647472 | orchestrator | 2026-01-08 01:05:14.647476 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-01-08 01:05:14.647479 | orchestrator | Thursday 08 January 2026 01:03:25 +0000 (0:00:01.397) 0:01:36.478 ****** 2026-01-08 01:05:14.647482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:05:14.647487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-08 01:05:14.647491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647510 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:05:14.647513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:05:14.647518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-08 01:05:14.647521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647540 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:05:14.647543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:05:14.647548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-08 01:05:14.647552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647570 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:05:14.647573 | orchestrator | 2026-01-08 01:05:14.647576 | orchestrator | TASK [service-check-containers : designate | Check containers] ***************** 2026-01-08 01:05:14.647579 | orchestrator | Thursday 08 January 2026 01:03:28 +0000 (0:00:03.017) 0:01:39.495 ****** 2026-01-08 01:05:14.647583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:05:14.647588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:05:14.647597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:05:14.647616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:05:14.647704 | orchestrator | 2026-01-08 01:05:14.647710 | orchestrator | TASK [service-check-containers : designate | Notify handlers to restart containers] *** 2026-01-08 01:05:14.647715 | orchestrator | Thursday 08 January 2026 01:03:36 +0000 (0:00:07.683) 0:01:47.179 ****** 2026-01-08 01:05:14.647720 | orchestrator | 
changed: [testbed-node-0] => { 2026-01-08 01:05:14.647725 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:05:14.647730 | orchestrator | } 2026-01-08 01:05:14.647735 | orchestrator | changed: [testbed-node-1] => { 2026-01-08 01:05:14.647740 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:05:14.647745 | orchestrator | } 2026-01-08 01:05:14.647749 | orchestrator | changed: [testbed-node-2] => { 2026-01-08 01:05:14.647754 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:05:14.647759 | orchestrator | } 2026-01-08 01:05:14.647765 | orchestrator | 2026-01-08 01:05:14.647770 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-08 01:05:14.647780 | orchestrator | Thursday 08 January 2026 01:03:36 +0000 (0:00:00.303) 0:01:47.482 ****** 2026-01-08 01:05:14.647787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:05:14.647793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 
'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-08 01:05:14.647804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-08 01:05:14.647832 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:05:14.647837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:05:14.647841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-08 01:05:14.647847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.647851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.647893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.647897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.647903 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:05:14.647908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2025.1', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:05:14.647912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2025.1', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-08 01:05:14.647918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2025.1', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.647921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2025.1', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.647924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2025.1', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.647930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2025.1', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:05:14.647933 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:05:14.647936 | orchestrator |
2026-01-08 01:05:14.647939 |
orchestrator | TASK [designate : include_tasks] ***********************************************
2026-01-08 01:05:14.647943 | orchestrator | Thursday 08 January 2026 01:03:38 +0000 (0:00:02.176) 0:01:49.658 ******
2026-01-08 01:05:14.647946 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:05:14.647949 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:05:14.647952 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:05:14.647955 | orchestrator |
2026-01-08 01:05:14.647958 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-01-08 01:05:14.647961 | orchestrator | Thursday 08 January 2026 01:03:39 +0000 (0:00:00.308) 0:01:49.967 ******
2026-01-08 01:05:14.647966 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-01-08 01:05:14.647970 | orchestrator |
2026-01-08 01:05:14.647973 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-01-08 01:05:14.647976 | orchestrator | Thursday 08 January 2026 01:03:41 +0000 (0:00:02.261) 0:01:52.229 ******
2026-01-08 01:05:14.647979 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-08 01:05:14.647982 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-01-08 01:05:14.647985 | orchestrator |
2026-01-08 01:05:14.647989 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-01-08 01:05:14.647992 | orchestrator | Thursday 08 January 2026 01:03:43 +0000 (0:00:02.187) 0:01:54.416 ******
2026-01-08 01:05:14.647995 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:05:14.647998 | orchestrator |
2026-01-08 01:05:14.648001 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-01-08 01:05:14.648004 | orchestrator | Thursday 08 January 2026 01:03:59 +0000 (0:00:15.356) 0:02:09.772 ******
2026-01-08 01:05:14.648007 | orchestrator |
2026-01-08 01:05:14.648011 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-01-08 01:05:14.648014 | orchestrator | Thursday 08 January 2026 01:03:59 +0000 (0:00:00.090) 0:02:09.863 ******
2026-01-08 01:05:14.648017 | orchestrator |
2026-01-08 01:05:14.648020 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-01-08 01:05:14.648023 | orchestrator | Thursday 08 January 2026 01:03:59 +0000 (0:00:00.071) 0:02:09.935 ******
2026-01-08 01:05:14.648026 | orchestrator |
2026-01-08 01:05:14.648029 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-01-08 01:05:14.648033 | orchestrator | Thursday 08 January 2026 01:03:59 +0000 (0:00:00.107) 0:02:10.042 ******
2026-01-08 01:05:14.648036 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:05:14.648039 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:05:14.648042 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:05:14.648045 | orchestrator |
2026-01-08 01:05:14.648050 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-01-08 01:05:14.648053 | orchestrator | Thursday 08 January 2026 01:04:07 +0000 (0:00:08.434) 0:02:18.476 ******
2026-01-08 01:05:14.648057 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:05:14.648060 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:05:14.648063 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:05:14.648066 | orchestrator |
2026-01-08 01:05:14.648069 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-01-08 01:05:14.648080 | orchestrator | Thursday 08 January 2026 01:04:22 +0000 (0:00:14.816) 0:02:33.293 ******
2026-01-08 01:05:14.648086 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:05:14.648091 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:05:14.648096 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:05:14.648100 | orchestrator |
2026-01-08 01:05:14.648105 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-01-08 01:05:14.648110 | orchestrator | Thursday 08 January 2026 01:04:32 +0000 (0:00:09.446) 0:02:42.739 ******
2026-01-08 01:05:14.648115 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:05:14.648119 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:05:14.648124 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:05:14.648130 | orchestrator |
2026-01-08 01:05:14.648134 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-01-08 01:05:14.648139 | orchestrator | Thursday 08 January 2026 01:04:44 +0000 (0:00:12.302) 0:02:55.042 ******
2026-01-08 01:05:14.648145 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:05:14.648150 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:05:14.648154 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:05:14.648159 | orchestrator |
2026-01-08 01:05:14.648165 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-01-08 01:05:14.648170 | orchestrator | Thursday 08 January 2026 01:04:53 +0000 (0:00:09.481) 0:03:04.524 ******
2026-01-08 01:05:14.648175 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:05:14.648180 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:05:14.648185 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:05:14.648190 | orchestrator |
2026-01-08 01:05:14.648195 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2026-01-08 01:05:14.648201 | orchestrator | Thursday 08 January 2026 01:05:04 +0000 (0:00:10.394) 0:03:14.919 ******
2026-01-08 01:05:14.648207 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:05:14.648213 | orchestrator |
2026-01-08 01:05:14.648219 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 01:05:14.648225 | orchestrator | testbed-node-0 : ok=30  changed=24  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-01-08 01:05:14.648231 | orchestrator | testbed-node-1 : ok=20  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-08 01:05:14.648237 | orchestrator | testbed-node-2 : ok=20  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-08 01:05:14.648242 | orchestrator |
2026-01-08 01:05:14.648248 | orchestrator |
2026-01-08 01:05:14.648254 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 01:05:14.648259 | orchestrator | Thursday 08 January 2026 01:05:11 +0000 (0:00:07.078) 0:03:21.997 ******
2026-01-08 01:05:14.648265 | orchestrator | ===============================================================================
2026-01-08 01:05:14.648270 | orchestrator | designate : Copying over designate.conf -------------------------------- 24.06s
2026-01-08 01:05:14.648275 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.36s
2026-01-08 01:05:14.648279 | orchestrator | designate : Restart designate-api container ---------------------------- 14.82s
2026-01-08 01:05:14.648284 | orchestrator | designate : Restart designate-producer container ----------------------- 12.30s
2026-01-08 01:05:14.648289 | orchestrator | designate : Restart designate-worker container ------------------------- 10.39s
2026-01-08 01:05:14.648299 | orchestrator | designate : Restart designate-mdns container ---------------------------- 9.48s
2026-01-08 01:05:14.648305 | orchestrator | designate : Restart designate-central container ------------------------- 9.45s
2026-01-08 01:05:14.648310 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.43s
2026-01-08 01:05:14.648316 | orchestrator | service-check-containers : designate | Check containers ----------------- 7.68s
2026-01-08 01:05:14.648328 | orchestrator | designate : Copying over config.json files for services ----------------- 7.58s
2026-01-08 01:05:14.648334 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 7.50s
2026-01-08 01:05:14.648339 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.08s
2026-01-08 01:05:14.648342 | orchestrator | service-ks-register : designate | Creating/deleting endpoints ----------- 6.83s
2026-01-08 01:05:14.648345 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.04s
2026-01-08 01:05:14.648348 | orchestrator | service-ks-register : designate | Granting/revoking user roles ---------- 4.60s
2026-01-08 01:05:14.648351 | orchestrator | service-ks-register : designate | Creating/deleting services ------------ 4.39s
2026-01-08 01:05:14.648354 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.39s
2026-01-08 01:05:14.648358 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.89s
2026-01-08 01:05:14.648361 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.82s
2026-01-08 01:05:14.648366 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.73s
2026-01-08 01:05:14.648371 | orchestrator | 2026-01-08 01:05:14 | INFO  | Task 7ec0f138-e0b3-43f1-b064-0e89761b9dc7 is in state STARTED
2026-01-08 01:05:14.648380 | orchestrator | 2026-01-08 01:05:14 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED
2026-01-08 01:05:14.648385 | orchestrator | 2026-01-08 01:05:14 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:05:14.648391 | orchestrator | 2026-01-08 01:05:14 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:05:17.681893 | orchestrator | 2026-01-08 01:05:17 | INFO  
| Task d67c75cb-323f-47e7-8090-7cada9e6f669 is in state STARTED
2026-01-08 01:05:17.682220 | orchestrator | 2026-01-08 01:05:17 | INFO  | Task 7ec0f138-e0b3-43f1-b064-0e89761b9dc7 is in state STARTED
2026-01-08 01:05:17.682829 | orchestrator | 2026-01-08 01:05:17 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED
2026-01-08 01:05:17.683422 | orchestrator | 2026-01-08 01:05:17 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:05:17.683517 | orchestrator | 2026-01-08 01:05:17 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:05:20.716233 | orchestrator | 2026-01-08 01:05:20 | INFO  | Task d67c75cb-323f-47e7-8090-7cada9e6f669 is in state STARTED
2026-01-08 01:05:20.716402 | orchestrator | 2026-01-08 01:05:20 | INFO  | Task 7ec0f138-e0b3-43f1-b064-0e89761b9dc7 is in state STARTED
2026-01-08 01:05:20.718602 | orchestrator | 2026-01-08 01:05:20 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED
2026-01-08 01:05:20.719477 | orchestrator | 2026-01-08 01:05:20 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:05:20.719509 | orchestrator | 2026-01-08 01:05:20 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:05:23.758928 | orchestrator | 2026-01-08 01:05:23 | INFO  | Task d67c75cb-323f-47e7-8090-7cada9e6f669 is in state STARTED
2026-01-08 01:05:23.760878 | orchestrator | 2026-01-08 01:05:23 | INFO  | Task 7ec0f138-e0b3-43f1-b064-0e89761b9dc7 is in state STARTED
2026-01-08 01:05:23.762258 | orchestrator | 2026-01-08 01:05:23 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED
2026-01-08 01:05:23.763622 | orchestrator | 2026-01-08 01:05:23 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:05:23.763665 | orchestrator | 2026-01-08 01:05:23 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:05:26.805458 | orchestrator | 2026-01-08 01:05:26 | INFO  | Task d67c75cb-323f-47e7-8090-7cada9e6f669 is in state STARTED
2026-01-08 01:05:26.806449 | orchestrator | 2026-01-08 01:05:26 | INFO  | Task 7ec0f138-e0b3-43f1-b064-0e89761b9dc7 is in state STARTED
2026-01-08 01:05:26.809533 | orchestrator | 2026-01-08 01:05:26 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED
2026-01-08 01:05:26.809589 | orchestrator | 2026-01-08 01:05:26 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:05:26.809680 | orchestrator | 2026-01-08 01:05:26 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:05:29.846541 | orchestrator | 2026-01-08 01:05:29 | INFO  | Task d67c75cb-323f-47e7-8090-7cada9e6f669 is in state STARTED
2026-01-08 01:05:29.847048 | orchestrator | 2026-01-08 01:05:29 | INFO  | Task 7ec0f138-e0b3-43f1-b064-0e89761b9dc7 is in state STARTED
2026-01-08 01:05:29.849369 | orchestrator | 2026-01-08 01:05:29 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED
2026-01-08 01:05:29.850000 | orchestrator | 2026-01-08 01:05:29 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:05:29.850036 | orchestrator | 2026-01-08 01:05:29 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:05:32.895816 | orchestrator | 2026-01-08 01:05:32 | INFO  | Task d67c75cb-323f-47e7-8090-7cada9e6f669 is in state STARTED
2026-01-08 01:05:32.896712 | orchestrator | 2026-01-08 01:05:32 | INFO  | Task 7ec0f138-e0b3-43f1-b064-0e89761b9dc7 is in state STARTED
2026-01-08 01:05:32.897616 | orchestrator | 2026-01-08 01:05:32 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED
2026-01-08 01:05:32.900047 | orchestrator | 2026-01-08 01:05:32 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:05:32.900079 | orchestrator | 2026-01-08 01:05:32 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:05:35.934812 | orchestrator | 2026-01-08 01:05:35 | INFO  | Task d67c75cb-323f-47e7-8090-7cada9e6f669 is in state STARTED
2026-01-08 01:05:35.936504 | orchestrator | 2026-01-08 01:05:35 | INFO  | Task 7ec0f138-e0b3-43f1-b064-0e89761b9dc7 is in state STARTED
2026-01-08 01:05:35.938406 | orchestrator | 2026-01-08 01:05:35 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED
2026-01-08 01:05:35.939331 | orchestrator | 2026-01-08 01:05:35 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:05:35.939365 | orchestrator | 2026-01-08 01:05:35 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:05:38.972596 | orchestrator | 2026-01-08 01:05:38 | INFO  | Task d67c75cb-323f-47e7-8090-7cada9e6f669 is in state STARTED
2026-01-08 01:05:38.973606 | orchestrator | 2026-01-08 01:05:38 | INFO  | Task 7ec0f138-e0b3-43f1-b064-0e89761b9dc7 is in state STARTED
2026-01-08 01:05:38.974279 | orchestrator | 2026-01-08 01:05:38 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED
2026-01-08 01:05:38.975031 | orchestrator | 2026-01-08 01:05:38 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:05:38.975066 | orchestrator | 2026-01-08 01:05:38 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:05:42.015498 | orchestrator | 2026-01-08 01:05:42 | INFO  | Task d67c75cb-323f-47e7-8090-7cada9e6f669 is in state STARTED
2026-01-08 01:05:42.015574 | orchestrator | 2026-01-08 01:05:42 | INFO  | Task 7ec0f138-e0b3-43f1-b064-0e89761b9dc7 is in state STARTED
2026-01-08 01:05:42.020510 | orchestrator | 2026-01-08 01:05:42 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED
2026-01-08 01:05:42.023240 | orchestrator | 2026-01-08 01:05:42 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:05:42.023317 | orchestrator | 2026-01-08 01:05:42 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:05:45.076016 | orchestrator | 2026-01-08 01:05:45 | INFO  | Task d67c75cb-323f-47e7-8090-7cada9e6f669 is in state STARTED
2026-01-08 01:05:45.078333 | orchestrator | 2026-01-08 01:05:45 | INFO  | Task 7ec0f138-e0b3-43f1-b064-0e89761b9dc7 is in state STARTED
2026-01-08 01:05:45.080505 | orchestrator | 2026-01-08 01:05:45 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED
2026-01-08 01:05:45.082072 | orchestrator | 2026-01-08 01:05:45 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:05:45.082156 | orchestrator | 2026-01-08 01:05:45 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:05:48.126148 | orchestrator | 2026-01-08 01:05:48 | INFO  | Task d67c75cb-323f-47e7-8090-7cada9e6f669 is in state STARTED
2026-01-08 01:05:48.127671 | orchestrator | 2026-01-08 01:05:48 | INFO  | Task 7ec0f138-e0b3-43f1-b064-0e89761b9dc7 is in state STARTED
2026-01-08 01:05:48.128818 | orchestrator | 2026-01-08 01:05:48 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED
2026-01-08 01:05:48.130424 | orchestrator | 2026-01-08 01:05:48 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:05:48.130496 | orchestrator | 2026-01-08 01:05:48 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:05:51.173714 | orchestrator | 2026-01-08 01:05:51 | INFO  | Task d67c75cb-323f-47e7-8090-7cada9e6f669 is in state STARTED
2026-01-08 01:05:51.173773 | orchestrator | 2026-01-08 01:05:51 | INFO  | Task 7ec0f138-e0b3-43f1-b064-0e89761b9dc7 is in state SUCCESS
2026-01-08 01:05:51.176572 | orchestrator | 2026-01-08 01:05:51 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED
2026-01-08 01:05:51.176614 | orchestrator | 2026-01-08 01:05:51 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:05:51.176915 | orchestrator | 2026-01-08 01:05:51 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED
2026-01-08 01:05:51.176948 | orchestrator | 2026-01-08 01:05:51 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:05:54.226471 | orchestrator | 2026-01-08 01:05:54 | INFO  | Task d67c75cb-323f-47e7-8090-7cada9e6f669 is in state STARTED
2026-01-08 01:05:54.228978 | orchestrator | 2026-01-08 01:05:54 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED
2026-01-08 01:05:54.229854 | orchestrator | 2026-01-08 01:05:54 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:05:54.234201 | orchestrator | 2026-01-08 01:05:54 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED
2026-01-08 01:05:54.234251 | orchestrator | 2026-01-08 01:05:54 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:05:57.290524 | orchestrator | 2026-01-08 01:05:57 | INFO  | Task d67c75cb-323f-47e7-8090-7cada9e6f669 is in state STARTED
2026-01-08 01:05:57.293419 | orchestrator | 2026-01-08 01:05:57 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED
2026-01-08 01:05:57.294870 | orchestrator | 2026-01-08 01:05:57 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:05:57.296402 | orchestrator | 2026-01-08 01:05:57 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED
2026-01-08 01:05:57.296569 | orchestrator | 2026-01-08 01:05:57 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:06:00.333865 | orchestrator | 2026-01-08 01:06:00 | INFO  | Task d67c75cb-323f-47e7-8090-7cada9e6f669 is in state STARTED
2026-01-08 01:06:00.334708 | orchestrator | 2026-01-08 01:06:00 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED
2026-01-08 01:06:00.335937 | orchestrator | 2026-01-08 01:06:00 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:06:00.337253 | orchestrator | 2026-01-08 01:06:00 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED
2026-01-08 01:06:00.337285 | orchestrator | 2026-01-08 01:06:00 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:06:03.385505 | orchestrator | 2026-01-08 01:06:03 | INFO  | Task d67c75cb-323f-47e7-8090-7cada9e6f669 is in state STARTED
2026-01-08 01:06:03.388281 | orchestrator | 2026-01-08 01:06:03 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED
2026-01-08 01:06:03.390166 | orchestrator | 2026-01-08 01:06:03 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:06:03.392401 | orchestrator | 2026-01-08 01:06:03 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED
2026-01-08 01:06:03.392466 | orchestrator | 2026-01-08 01:06:03 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:06:06.448921 | orchestrator | 2026-01-08 01:06:06 | INFO  | Task d67c75cb-323f-47e7-8090-7cada9e6f669 is in state STARTED
2026-01-08 01:06:06.449826 | orchestrator | 2026-01-08 01:06:06 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED
2026-01-08 01:06:06.452094 | orchestrator | 2026-01-08 01:06:06 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:06:06.453381 | orchestrator | 2026-01-08 01:06:06 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED
2026-01-08 01:06:06.453423 | orchestrator | 2026-01-08 01:06:06 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:06:09.508465 | orchestrator | 2026-01-08 01:06:09 | INFO  | Task d67c75cb-323f-47e7-8090-7cada9e6f669 is in state SUCCESS
2026-01-08 01:06:09.509473 | orchestrator |
2026-01-08 01:06:09.509522 | orchestrator |
2026-01-08 01:06:09.509530 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-08 01:06:09.509536 | orchestrator |
2026-01-08 01:06:09.509542 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-08 01:06:09.509548 | orchestrator | Thursday 08 January 2026 01:05:16 +0000 (0:00:00.298) 0:00:00.298 ******
2026-01-08 01:06:09.509554 | orchestrator | ok: 
[testbed-node-0]
2026-01-08 01:06:09.509560 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:06:09.509566 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:06:09.509572 | orchestrator | ok: [testbed-node-3]
2026-01-08 01:06:09.509577 | orchestrator | ok: [testbed-node-4]
2026-01-08 01:06:09.509589 | orchestrator | ok: [testbed-node-5]
2026-01-08 01:06:09.509595 | orchestrator | ok: [testbed-manager]
2026-01-08 01:06:09.509601 | orchestrator |
2026-01-08 01:06:09.509607 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-08 01:06:09.509622 | orchestrator | Thursday 08 January 2026 01:05:17 +0000 (0:00:01.046) 0:00:01.344 ******
2026-01-08 01:06:09.509628 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-01-08 01:06:09.509634 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-01-08 01:06:09.509639 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-01-08 01:06:09.509645 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-01-08 01:06:09.509650 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-01-08 01:06:09.509656 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-01-08 01:06:09.509662 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-01-08 01:06:09.509667 | orchestrator |
2026-01-08 01:06:09.509673 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-01-08 01:06:09.509678 | orchestrator |
2026-01-08 01:06:09.509683 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-01-08 01:06:09.509688 | orchestrator | Thursday 08 January 2026 01:05:18 +0000 (0:00:00.851) 0:00:02.196 ******
2026-01-08 01:06:09.509709 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-01-08 01:06:09.509716 | orchestrator |
2026-01-08 01:06:09.509722 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating/deleting services] *************
2026-01-08 01:06:09.509727 | orchestrator | Thursday 08 January 2026 01:05:19 +0000 (0:00:01.467) 0:00:03.663 ******
2026-01-08 01:06:09.509733 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2026-01-08 01:06:09.509738 | orchestrator |
2026-01-08 01:06:09.509744 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating/deleting endpoints] ************
2026-01-08 01:06:09.509749 | orchestrator | Thursday 08 January 2026 01:05:23 +0000 (0:00:03.406) 0:00:07.069 ******
2026-01-08 01:06:09.509755 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-01-08 01:06:09.509762 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-01-08 01:06:09.509767 | orchestrator |
2026-01-08 01:06:09.509773 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-01-08 01:06:09.509778 | orchestrator | Thursday 08 January 2026 01:05:29 +0000 (0:00:06.400) 0:00:13.470 ******
2026-01-08 01:06:09.509784 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-08 01:06:09.509789 | orchestrator |
2026-01-08 01:06:09.509796 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-01-08 01:06:09.509801 | orchestrator | Thursday 08 January 2026 01:05:32 +0000 (0:00:03.143) 0:00:16.613 ******
2026-01-08 01:06:09.509807 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-08 01:06:09.510009 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service)
2026-01-08 01:06:09.510049 | orchestrator |
2026-01-08 01:06:09.510055 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-01-08 01:06:09.510061 | orchestrator | Thursday 08 January 2026 01:05:36 +0000 (0:00:03.838) 0:00:20.452 ******
2026-01-08 01:06:09.510067 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-08 01:06:09.510074 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin)
2026-01-08 01:06:09.510080 | orchestrator |
2026-01-08 01:06:09.510086 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting/revoking user roles] ***********
2026-01-08 01:06:09.510092 | orchestrator | Thursday 08 January 2026 01:05:42 +0000 (0:00:06.549) 0:00:27.001 ******
2026-01-08 01:06:09.510098 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)
2026-01-08 01:06:09.510105 | orchestrator |
2026-01-08 01:06:09.510110 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 01:06:09.510116 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 01:06:09.510124 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 01:06:09.510132 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 01:06:09.510140 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 01:06:09.510148 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 01:06:09.510178 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 01:06:09.510183 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 01:06:09.510196 | orchestrator |
2026-01-08 01:06:09.510201 | orchestrator |
2026-01-08 01:06:09.510207 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 01:06:09.510212 | orchestrator | Thursday 08 January 2026 01:05:48 +0000 (0:00:05.228) 0:00:32.230 ******
2026-01-08 01:06:09.510217 | orchestrator | ===============================================================================
2026-01-08 01:06:09.510222 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.55s
2026-01-08 01:06:09.510238 | orchestrator | service-ks-register : ceph-rgw | Creating/deleting endpoints ------------ 6.40s
2026-01-08 01:06:09.510250 | orchestrator | service-ks-register : ceph-rgw | Granting/revoking user roles ----------- 5.23s
2026-01-08 01:06:09.510256 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.84s
2026-01-08 01:06:09.510261 | orchestrator | service-ks-register : ceph-rgw | Creating/deleting services ------------- 3.41s
2026-01-08 01:06:09.510267 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.14s
2026-01-08 01:06:09.510272 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.47s
2026-01-08 01:06:09.510278 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.05s
2026-01-08 01:06:09.510283 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.85s
2026-01-08 01:06:09.510289 | orchestrator |
2026-01-08 01:06:09.510294 | orchestrator |
2026-01-08 01:06:09.510299 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-08 01:06:09.510305 | orchestrator |
2026-01-08 01:06:09.510311 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-08 01:06:09.510316 | orchestrator | Thursday 08 January 2026 01:04:58 +0000 (0:00:00.258) 0:00:00.258 ******
2026-01-08 01:06:09.510322 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:06:09.510328 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:06:09.510333 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:06:09.510339 | orchestrator |
2026-01-08 01:06:09.510345 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-08 01:06:09.510350 | orchestrator | Thursday 08 January 2026 01:04:59 +0000 (0:00:00.309) 0:00:00.568 ******
2026-01-08 01:06:09.510356 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-01-08 01:06:09.510362 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-01-08 01:06:09.510367 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-01-08 01:06:09.510373 | orchestrator |
2026-01-08 01:06:09.510379 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-01-08 01:06:09.510385 | orchestrator |
2026-01-08 01:06:09.510390 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-01-08 01:06:09.510396 | orchestrator | Thursday 08 January 2026 01:04:59 +0000 (0:00:00.463) 0:00:01.032 ******
2026-01-08 01:06:09.510401 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 01:06:09.510407 | orchestrator |
2026-01-08 01:06:09.510413 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************
2026-01-08 01:06:09.510418 | orchestrator | Thursday 08 January 2026 01:05:00 +0000 (0:00:00.520) 0:00:01.553 ******
2026-01-08 01:06:09.510424 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-01-08 01:06:09.510429 | orchestrator |
2026-01-08 01:06:09.510435 | orchestrator | TASK [service-ks-register : placement | Creating/deleting endpoints] ***********
2026-01-08 01:06:09.510440 | orchestrator | Thursday 08 January 2026 01:05:03 +0000 (0:00:03.306) 0:00:04.859 ******
2026-01-08 01:06:09.510446 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-01-08 01:06:09.510451 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-01-08 01:06:09.510457 | orchestrator |
2026-01-08 01:06:09.510463 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-01-08 01:06:09.510468 | orchestrator | Thursday 08 January 2026 01:05:09 +0000 (0:00:06.216) 0:00:11.077 ******
2026-01-08 01:06:09.510479 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-08 01:06:09.510484 | orchestrator |
2026-01-08 01:06:09.510490 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-01-08 01:06:09.510495 | orchestrator | Thursday 08 January 2026 01:05:12 +0000 (0:00:03.208) 0:00:14.285 ******
2026-01-08 01:06:09.510501 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-08 01:06:09.510507 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-01-08 01:06:09.510512 | orchestrator |
2026-01-08 01:06:09.510518 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-01-08 01:06:09.510523 | orchestrator | Thursday 08 January 2026 01:05:16 +0000 (0:00:03.464) 0:00:17.749 ******
2026-01-08 01:06:09.510529 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-08 01:06:09.510534 | orchestrator |
2026-01-08 01:06:09.510540 | orchestrator | TASK [service-ks-register : placement | Granting/revoking user roles] **********
2026-01-08 01:06:09.510545 | orchestrator | Thursday 08 January 2026 01:05:19 +0000 (0:00:02.880) 0:00:20.629 ******
2026-01-08 01:06:09.510551 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-01-08 01:06:09.510557 | orchestrator |
2026-01-08 01:06:09.510562 | orchestrator | TASK [placement : 
include_tasks] *********************************************** 2026-01-08 01:06:09.510569 | orchestrator | Thursday 08 January 2026 01:05:22 +0000 (0:00:03.668) 0:00:24.297 ****** 2026-01-08 01:06:09.510575 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:09.510581 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:09.510588 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:09.510594 | orchestrator | 2026-01-08 01:06:09.510607 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-01-08 01:06:09.510614 | orchestrator | Thursday 08 January 2026 01:05:23 +0000 (0:00:00.304) 0:00:24.602 ****** 2026-01-08 01:06:09.510626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-08 01:06:09.510635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-08 01:06:09.510647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-08 01:06:09.510654 | orchestrator | 2026-01-08 01:06:09.510660 | orchestrator | TASK [placement : 
Check if policies shall be overwritten] ********************** 2026-01-08 01:06:09.510666 | orchestrator | Thursday 08 January 2026 01:05:23 +0000 (0:00:00.824) 0:00:25.426 ****** 2026-01-08 01:06:09.510672 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:09.510678 | orchestrator | 2026-01-08 01:06:09.510685 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-01-08 01:06:09.510690 | orchestrator | Thursday 08 January 2026 01:05:24 +0000 (0:00:00.129) 0:00:25.555 ****** 2026-01-08 01:06:09.510695 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:09.510701 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:09.510707 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:09.510713 | orchestrator | 2026-01-08 01:06:09.510719 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-08 01:06:09.510725 | orchestrator | Thursday 08 January 2026 01:05:24 +0000 (0:00:00.480) 0:00:26.036 ****** 2026-01-08 01:06:09.510732 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 01:06:09.510738 | orchestrator | 2026-01-08 01:06:09.510744 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-01-08 01:06:09.510754 | orchestrator | Thursday 08 January 2026 01:05:25 +0000 (0:00:00.588) 0:00:26.624 ****** 2026-01-08 01:06:09.510764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-08 01:06:09.510772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-08 01:06:09.510783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-08 01:06:09.510790 | orchestrator | 2026-01-08 01:06:09.510796 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-01-08 01:06:09.510802 | orchestrator | Thursday 08 January 2026 01:05:26 +0000 (0:00:01.475) 0:00:28.100 ****** 2026-01-08 01:06:09.510814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-08 01:06:09.510821 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:09.510831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-08 01:06:09.510838 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:09.510848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-08 01:06:09.510854 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:09.510860 | orchestrator | 2026-01-08 01:06:09.510866 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-01-08 01:06:09.510872 | orchestrator | Thursday 08 January 2026 01:05:27 +0000 (0:00:00.722) 0:00:28.823 ****** 2026-01-08 01:06:09.510878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-08 01:06:09.510885 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:09.510900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-08 01:06:09.510922 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:09.510929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-08 01:06:09.510940 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:09.510946 | orchestrator | 2026-01-08 01:06:09.511033 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-01-08 01:06:09.511040 | orchestrator | Thursday 08 January 2026 01:05:28 +0000 (0:00:00.711) 0:00:29.534 ****** 2026-01-08 01:06:09.511047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-08 01:06:09.511054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-08 01:06:09.511068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-08 01:06:09.511082 | orchestrator | 2026-01-08 01:06:09.511088 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-01-08 01:06:09.511093 | orchestrator | Thursday 08 January 2026 01:05:29 +0000 (0:00:01.384) 0:00:30.919 ****** 2026-01-08 01:06:09.511100 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-08 01:06:09.511106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-08 01:06:09.511116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-08 01:06:09.511122 | orchestrator | 2026-01-08 01:06:09.511127 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-01-08 01:06:09.511133 | orchestrator | Thursday 08 January 2026 01:05:31 +0000 (0:00:02.378) 0:00:33.298 ****** 2026-01-08 01:06:09.511138 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-01-08 01:06:09.511144 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:09.511150 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-01-08 01:06:09.511159 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:09.511167 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-01-08 01:06:09.511173 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:09.511179 | orchestrator | 2026-01-08 01:06:09.511185 | orchestrator | TASK [Configure uWSGI for Placement] ******************************************* 2026-01-08 01:06:09.511191 | orchestrator | Thursday 08 January 2026 01:05:32 +0000 (0:00:00.507) 0:00:33.806 ****** 2026-01-08 01:06:09.511197 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 01:06:09.511203 | orchestrator | 2026-01-08 01:06:09.511208 | orchestrator | TASK [service-uwsgi-config : Copying over placement-api uWSGI config] ********** 2026-01-08 01:06:09.511214 | orchestrator | Thursday 08 January 2026 01:05:33 +0000 (0:00:00.739) 0:00:34.545 ****** 2026-01-08 01:06:09.511220 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:06:09.511226 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:06:09.511231 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:06:09.511237 | orchestrator | 2026-01-08 01:06:09.511243 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-01-08 01:06:09.511249 | orchestrator | Thursday 08 January 2026 01:05:35 +0000 (0:00:02.067) 0:00:36.613 ****** 2026-01-08 01:06:09.511254 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:06:09.511260 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:06:09.511266 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:06:09.511272 | orchestrator | 2026-01-08 01:06:09.511277 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-01-08 01:06:09.511283 | orchestrator | Thursday 08 January 2026 01:05:36 +0000 (0:00:01.498) 0:00:38.112 ****** 2026-01-08 01:06:09.511289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-08 01:06:09.511295 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:09.511301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-08 01:06:09.511307 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:09.511320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-08 01:06:09.511332 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:09.511338 | orchestrator | 2026-01-08 01:06:09.511344 | orchestrator | TASK [service-check-containers : placement | Check containers] ***************** 2026-01-08 01:06:09.511349 | orchestrator | Thursday 08 January 2026 01:05:37 +0000 (0:00:00.794) 0:00:38.907 ****** 2026-01-08 01:06:09.511355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-08 01:06:09.511362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-08 01:06:09.511368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-08 01:06:09.511378 | orchestrator | 2026-01-08 01:06:09.511387 | orchestrator | TASK [service-check-containers : placement | Notify handlers to restart containers] *** 2026-01-08 01:06:09.511393 | orchestrator | Thursday 08 January 2026 01:05:38 +0000 (0:00:01.489) 0:00:40.397 ****** 2026-01-08 01:06:09.511399 | orchestrator | changed: [testbed-node-0] => { 2026-01-08 01:06:09.511405 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:06:09.511411 | orchestrator | } 2026-01-08 01:06:09.511417 | orchestrator | changed: [testbed-node-1] => { 2026-01-08 01:06:09.511423 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:06:09.511428 | orchestrator | } 2026-01-08 01:06:09.511434 | orchestrator | changed: [testbed-node-2] => { 2026-01-08 01:06:09.511440 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:06:09.511445 | orchestrator | } 2026-01-08 01:06:09.511451 | orchestrator | 2026-01-08 01:06:09.511457 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-08 01:06:09.511466 | orchestrator | Thursday 08 January 2026 01:05:39 +0000 (0:00:00.644) 0:00:41.041 ****** 2026-01-08 01:06:09.511472 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-08 01:06:09.511479 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:09.511485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-08 01:06:09.511491 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:09.511497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-08 01:06:09.511506 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:09.511512 | orchestrator | 2026-01-08 01:06:09.511517 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-01-08 01:06:09.511523 | orchestrator | Thursday 08 January 2026 01:05:40 +0000 (0:00:01.194) 0:00:42.235 ****** 2026-01-08 01:06:09.511529 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:06:09.511534 | orchestrator | 2026-01-08 01:06:09.511543 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-01-08 01:06:09.511549 | orchestrator | Thursday 08 January 2026 
01:05:43 +0000 (0:00:02.403) 0:00:44.639 ****** 2026-01-08 01:06:09.511555 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:06:09.511561 | orchestrator | 2026-01-08 01:06:09.511566 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-01-08 01:06:09.511572 | orchestrator | Thursday 08 January 2026 01:05:45 +0000 (0:00:02.290) 0:00:46.929 ****** 2026-01-08 01:06:09.511577 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:06:09.511584 | orchestrator | 2026-01-08 01:06:09.511589 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-01-08 01:06:09.511595 | orchestrator | Thursday 08 January 2026 01:05:58 +0000 (0:00:12.969) 0:00:59.899 ****** 2026-01-08 01:06:09.511600 | orchestrator | 2026-01-08 01:06:09.511606 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-01-08 01:06:09.511613 | orchestrator | Thursday 08 January 2026 01:05:58 +0000 (0:00:00.063) 0:00:59.963 ****** 2026-01-08 01:06:09.511619 | orchestrator | 2026-01-08 01:06:09.511625 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-01-08 01:06:09.511631 | orchestrator | Thursday 08 January 2026 01:05:58 +0000 (0:00:00.264) 0:01:00.228 ****** 2026-01-08 01:06:09.511636 | orchestrator | 2026-01-08 01:06:09.511642 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-01-08 01:06:09.511648 | orchestrator | Thursday 08 January 2026 01:05:58 +0000 (0:00:00.069) 0:01:00.297 ****** 2026-01-08 01:06:09.511653 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:06:09.511659 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:06:09.511665 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:06:09.511671 | orchestrator | 2026-01-08 01:06:09.511677 | orchestrator | PLAY RECAP ********************************************************************* 
2026-01-08 01:06:09.511684 | orchestrator | testbed-node-0 : ok=23  changed=16  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-08 01:06:09.511689 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-08 01:06:09.511695 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-08 01:06:09.511701 | orchestrator | 2026-01-08 01:06:09.511706 | orchestrator | 2026-01-08 01:06:09.511713 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 01:06:09.511719 | orchestrator | Thursday 08 January 2026 01:06:08 +0000 (0:00:09.794) 0:01:10.092 ****** 2026-01-08 01:06:09.511725 | orchestrator | =============================================================================== 2026-01-08 01:06:09.511736 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.97s 2026-01-08 01:06:09.511742 | orchestrator | placement : Restart placement-api container ----------------------------- 9.79s 2026-01-08 01:06:09.511748 | orchestrator | service-ks-register : placement | Creating/deleting endpoints ----------- 6.22s 2026-01-08 01:06:09.511754 | orchestrator | service-ks-register : placement | Granting/revoking user roles ---------- 3.67s 2026-01-08 01:06:09.511760 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.46s 2026-01-08 01:06:09.511767 | orchestrator | service-ks-register : placement | Creating/deleting services ------------ 3.31s 2026-01-08 01:06:09.511773 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.21s 2026-01-08 01:06:09.511779 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 2.88s 2026-01-08 01:06:09.511785 | orchestrator | placement : Creating placement databases -------------------------------- 2.40s 2026-01-08 01:06:09.511791 | 
orchestrator | placement : Copying over placement.conf --------------------------------- 2.38s 2026-01-08 01:06:09.511798 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.29s 2026-01-08 01:06:09.511804 | orchestrator | service-uwsgi-config : Copying over placement-api uWSGI config ---------- 2.07s 2026-01-08 01:06:09.511810 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.50s 2026-01-08 01:06:09.511816 | orchestrator | service-check-containers : placement | Check containers ----------------- 1.49s 2026-01-08 01:06:09.511822 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.48s 2026-01-08 01:06:09.511828 | orchestrator | placement : Copying over config.json files for services ----------------- 1.38s 2026-01-08 01:06:09.511835 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.19s 2026-01-08 01:06:09.511841 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.82s 2026-01-08 01:06:09.511847 | orchestrator | placement : Copying over existing policy file --------------------------- 0.79s 2026-01-08 01:06:09.511854 | orchestrator | Configure uWSGI for Placement ------------------------------------------- 0.74s 2026-01-08 01:06:09.511860 | orchestrator | 2026-01-08 01:06:09 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:06:09.511866 | orchestrator | 2026-01-08 01:06:09 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED 2026-01-08 01:06:09.513243 | orchestrator | 2026-01-08 01:06:09 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:06:09.513545 | orchestrator | 2026-01-08 01:06:09 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:06:12.567470 | orchestrator | 2026-01-08 01:06:12 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 
2026-01-08 01:06:12.569885 | orchestrator | 2026-01-08 01:06:12 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED 2026-01-08 01:06:12.570928 | orchestrator | 2026-01-08 01:06:12 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:06:12.574103 | orchestrator | 2026-01-08 01:06:12 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED 2026-01-08 01:06:12.575388 | orchestrator | 2026-01-08 01:06:12 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:06:15.615004 | orchestrator | 2026-01-08 01:06:15 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:06:15.617040 | orchestrator | 2026-01-08 01:06:15 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED 2026-01-08 01:06:15.619097 | orchestrator | 2026-01-08 01:06:15 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:06:15.620548 | orchestrator | 2026-01-08 01:06:15 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED 2026-01-08 01:06:15.620590 | orchestrator | 2026-01-08 01:06:15 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:06:18.670872 | orchestrator | 2026-01-08 01:06:18 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:06:18.672465 | orchestrator | 2026-01-08 01:06:18 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED 2026-01-08 01:06:18.674092 | orchestrator | 2026-01-08 01:06:18 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:06:18.675865 | orchestrator | 2026-01-08 01:06:18 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED 2026-01-08 01:06:18.676018 | orchestrator | 2026-01-08 01:06:18 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:06:21.715565 | orchestrator | 2026-01-08 01:06:21 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state STARTED 2026-01-08 01:06:21.716266 | 
orchestrator | 2026-01-08 01:06:21 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED 2026-01-08 01:06:21.718002 | orchestrator | 2026-01-08 01:06:21 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:06:21.719808 | orchestrator | 2026-01-08 01:06:21 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED 2026-01-08 01:06:21.720819 | orchestrator | 2026-01-08 01:06:21 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:06:24.751450 | orchestrator | 2026-01-08 01:06:24 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:06:24.753759 | orchestrator | 2026-01-08 01:06:24 | INFO  | Task 763c019c-5038-4f6f-9cfe-5b5285d8d91f is in state SUCCESS 2026-01-08 01:06:24.755041 | orchestrator | 2026-01-08 01:06:24.755090 | orchestrator | 2026-01-08 01:06:24.755097 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-08 01:06:24.755103 | orchestrator | 2026-01-08 01:06:24.755108 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-08 01:06:24.755113 | orchestrator | Thursday 08 January 2026 01:01:47 +0000 (0:00:00.312) 0:00:00.312 ****** 2026-01-08 01:06:24.755118 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:06:24.755124 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:06:24.755129 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:06:24.755134 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:06:24.755139 | orchestrator | ok: [testbed-node-4] 2026-01-08 01:06:24.755144 | orchestrator | ok: [testbed-node-5] 2026-01-08 01:06:24.755149 | orchestrator | 2026-01-08 01:06:24.755153 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-08 01:06:24.755158 | orchestrator | Thursday 08 January 2026 01:01:48 +0000 (0:00:00.774) 0:00:01.087 ****** 2026-01-08 01:06:24.755163 | orchestrator | ok: 
[testbed-node-0] => (item=enable_neutron_True) 2026-01-08 01:06:24.755169 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-01-08 01:06:24.755173 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-01-08 01:06:24.755178 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-01-08 01:06:24.755183 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-01-08 01:06:24.755188 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-01-08 01:06:24.755193 | orchestrator | 2026-01-08 01:06:24.755198 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-01-08 01:06:24.755203 | orchestrator | 2026-01-08 01:06:24.755208 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-08 01:06:24.755213 | orchestrator | Thursday 08 January 2026 01:01:49 +0000 (0:00:00.536) 0:00:01.624 ****** 2026-01-08 01:06:24.755218 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 01:06:24.755223 | orchestrator | 2026-01-08 01:06:24.755228 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-01-08 01:06:24.755249 | orchestrator | Thursday 08 January 2026 01:01:50 +0000 (0:00:01.133) 0:00:02.757 ****** 2026-01-08 01:06:24.755291 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:06:24.755297 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:06:24.755301 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:06:24.755306 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:06:24.755311 | orchestrator | ok: [testbed-node-4] 2026-01-08 01:06:24.755315 | orchestrator | ok: [testbed-node-5] 2026-01-08 01:06:24.755320 | orchestrator | 2026-01-08 01:06:24.755325 | orchestrator | TASK [neutron : Get container volume facts] 
************************************ 2026-01-08 01:06:24.755330 | orchestrator | Thursday 08 January 2026 01:01:52 +0000 (0:00:01.959) 0:00:04.717 ****** 2026-01-08 01:06:24.755335 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:06:24.755339 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:06:24.755403 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:06:24.755411 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:06:24.755416 | orchestrator | ok: [testbed-node-4] 2026-01-08 01:06:24.755421 | orchestrator | ok: [testbed-node-5] 2026-01-08 01:06:24.755426 | orchestrator | 2026-01-08 01:06:24.755432 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-01-08 01:06:24.755460 | orchestrator | Thursday 08 January 2026 01:01:53 +0000 (0:00:01.159) 0:00:05.876 ****** 2026-01-08 01:06:24.755473 | orchestrator | ok: [testbed-node-0] => { 2026-01-08 01:06:24.755479 | orchestrator |  "changed": false, 2026-01-08 01:06:24.755483 | orchestrator |  "msg": "All assertions passed" 2026-01-08 01:06:24.755605 | orchestrator | } 2026-01-08 01:06:24.755614 | orchestrator | ok: [testbed-node-1] => { 2026-01-08 01:06:24.755620 | orchestrator |  "changed": false, 2026-01-08 01:06:24.755625 | orchestrator |  "msg": "All assertions passed" 2026-01-08 01:06:24.755630 | orchestrator | } 2026-01-08 01:06:24.755635 | orchestrator | ok: [testbed-node-2] => { 2026-01-08 01:06:24.755640 | orchestrator |  "changed": false, 2026-01-08 01:06:24.755644 | orchestrator |  "msg": "All assertions passed" 2026-01-08 01:06:24.755649 | orchestrator | } 2026-01-08 01:06:24.755653 | orchestrator | ok: [testbed-node-3] => { 2026-01-08 01:06:24.755658 | orchestrator |  "changed": false, 2026-01-08 01:06:24.755663 | orchestrator |  "msg": "All assertions passed" 2026-01-08 01:06:24.755668 | orchestrator | } 2026-01-08 01:06:24.755674 | orchestrator | ok: [testbed-node-4] => { 2026-01-08 01:06:24.755679 | orchestrator |  "changed": false, 2026-01-08 
01:06:24.755684 | orchestrator |  "msg": "All assertions passed" 2026-01-08 01:06:24.755689 | orchestrator | } 2026-01-08 01:06:24.755694 | orchestrator | ok: [testbed-node-5] => { 2026-01-08 01:06:24.755699 | orchestrator |  "changed": false, 2026-01-08 01:06:24.755705 | orchestrator |  "msg": "All assertions passed" 2026-01-08 01:06:24.755710 | orchestrator | } 2026-01-08 01:06:24.755715 | orchestrator | 2026-01-08 01:06:24.755720 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-01-08 01:06:24.755726 | orchestrator | Thursday 08 January 2026 01:01:54 +0000 (0:00:00.946) 0:00:06.822 ****** 2026-01-08 01:06:24.755731 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:24.755735 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:24.755740 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:24.755745 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:06:24.755750 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:06:24.755755 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:06:24.755759 | orchestrator | 2026-01-08 01:06:24.755764 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting services] ************** 2026-01-08 01:06:24.755769 | orchestrator | Thursday 08 January 2026 01:01:55 +0000 (0:00:00.626) 0:00:07.449 ****** 2026-01-08 01:06:24.755774 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-01-08 01:06:24.755779 | orchestrator | 2026-01-08 01:06:24.755784 | orchestrator | TASK [service-ks-register : neutron | Creating/deleting endpoints] ************* 2026-01-08 01:06:24.755788 | orchestrator | Thursday 08 January 2026 01:01:58 +0000 (0:00:03.595) 0:00:11.044 ****** 2026-01-08 01:06:24.755801 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-01-08 01:06:24.755807 | orchestrator | changed: [testbed-node-0] => (item=neutron -> 
https://api.testbed.osism.xyz:9696 -> public) 2026-01-08 01:06:24.755812 | orchestrator | 2026-01-08 01:06:24.755826 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-01-08 01:06:24.755832 | orchestrator | Thursday 08 January 2026 01:02:05 +0000 (0:00:06.604) 0:00:17.648 ****** 2026-01-08 01:06:24.755836 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-08 01:06:24.755841 | orchestrator | 2026-01-08 01:06:24.755846 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-01-08 01:06:24.755892 | orchestrator | Thursday 08 January 2026 01:02:08 +0000 (0:00:03.331) 0:00:20.979 ****** 2026-01-08 01:06:24.755899 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-08 01:06:24.755905 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-01-08 01:06:24.755910 | orchestrator | 2026-01-08 01:06:24.755915 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-01-08 01:06:24.755920 | orchestrator | Thursday 08 January 2026 01:02:12 +0000 (0:00:03.941) 0:00:24.920 ****** 2026-01-08 01:06:24.755938 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-08 01:06:24.755945 | orchestrator | 2026-01-08 01:06:24.755949 | orchestrator | TASK [service-ks-register : neutron | Granting/revoking user roles] ************ 2026-01-08 01:06:24.755954 | orchestrator | Thursday 08 January 2026 01:02:16 +0000 (0:00:04.140) 0:00:29.061 ****** 2026-01-08 01:06:24.756124 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-01-08 01:06:24.756135 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-01-08 01:06:24.756141 | orchestrator | 2026-01-08 01:06:24.756147 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-08 01:06:24.756153 | orchestrator | Thursday 08 
January 2026 01:02:24 +0000 (0:00:07.549) 0:00:36.610 ****** 2026-01-08 01:06:24.756159 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:24.756164 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:24.756170 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:24.756176 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:06:24.756183 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:06:24.756189 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:06:24.756196 | orchestrator | 2026-01-08 01:06:24.756202 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-01-08 01:06:24.756207 | orchestrator | Thursday 08 January 2026 01:02:25 +0000 (0:00:00.751) 0:00:37.362 ****** 2026-01-08 01:06:24.756214 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:24.756220 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:24.756226 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:06:24.756232 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:24.756239 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:06:24.756245 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:06:24.756251 | orchestrator | 2026-01-08 01:06:24.756257 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-01-08 01:06:24.756263 | orchestrator | Thursday 08 January 2026 01:02:27 +0000 (0:00:02.282) 0:00:39.645 ****** 2026-01-08 01:06:24.756269 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:06:24.756276 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:06:24.756282 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:06:24.756288 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:06:24.756294 | orchestrator | ok: [testbed-node-4] 2026-01-08 01:06:24.756299 | orchestrator | ok: [testbed-node-5] 2026-01-08 01:06:24.756305 | orchestrator | 2026-01-08 01:06:24.756312 | orchestrator | TASK [Setting sysctl values] 
*************************************************** 2026-01-08 01:06:24.756319 | orchestrator | Thursday 08 January 2026 01:02:29 +0000 (0:00:01.855) 0:00:41.500 ****** 2026-01-08 01:06:24.756332 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:24.756338 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:24.756351 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:24.756357 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:06:24.756364 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:06:24.756370 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:06:24.756376 | orchestrator | 2026-01-08 01:06:24.756383 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-01-08 01:06:24.756389 | orchestrator | Thursday 08 January 2026 01:02:32 +0000 (0:00:03.426) 0:00:44.927 ****** 2026-01-08 01:06:24.756397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:24.756434 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:24.756443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:24.756454 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-08 01:06:24.756466 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-08 01:06:24.756472 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-08 01:06:24.756478 | orchestrator |
2026-01-08 01:06:24.756484 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2026-01-08 01:06:24.756490 | orchestrator | Thursday 08 January 2026 01:02:36 +0000 (0:00:03.971) 0:00:48.899 ******
2026-01-08 01:06:24.756496 | orchestrator | [WARNING]: Skipped
2026-01-08 01:06:24.756502 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2026-01-08 01:06:24.756526 | orchestrator | due to this access issue:
2026-01-08 01:06:24.756533 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2026-01-08 01:06:24.756539 | orchestrator | a directory
2026-01-08 01:06:24.756545 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-08 01:06:24.756550 | orchestrator |
2026-01-08 01:06:24.756556 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-08 01:06:24.756562 | orchestrator | Thursday 08 January 2026 01:02:37 +0000 (0:00:00.889) 0:00:49.788 ******
2026-01-08 01:06:24.756568 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-08 01:06:24.756575 | orchestrator |
2026-01-08 01:06:24.756581 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2026-01-08 01:06:24.756586 | orchestrator | Thursday 08 January 2026 01:02:38 +0000 (0:00:01.333) 0:00:51.121 ******
2026-01-08 01:06:24.756592 | orchestrator | changed:
[testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:24.756605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 
2026-01-08 01:06:24.756611 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-08 01:06:24.756637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:24.756644 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 
'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-08 01:06:24.756650 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-08 01:06:24.756659 | orchestrator |
2026-01-08 01:06:24.756664 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2026-01-08 01:06:24.756669 | orchestrator | Thursday 08 January 2026 01:02:43 +0000 (0:00:04.530) 0:00:55.651 ******
2026-01-08 01:06:24.756676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:06:24.756681 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:24.756687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:06:24.756692 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:24.756713 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-08 01:06:24.756719 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:06:24.756724 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-08 01:06:24.756732 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:06:24.756739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:06:24.756744 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:06:24.756749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-08 01:06:24.756754 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:06:24.756759 | orchestrator |
2026-01-08 01:06:24.756764 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2026-01-08 01:06:24.756769 | orchestrator | Thursday 08 January 2026 01:02:47 +0000 (0:00:04.218) 0:00:59.870 ******
2026-01-08 01:06:24.756790 | orchestrator | skipping: [testbed-node-2] => (item={'key':
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:06:24.756795 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:24.756800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-08 01:06:24.756809 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:06:24.756814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-08 01:06:24.756819 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:06:24.756829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-08 01:06:24.756834 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:06:24.756839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:06:24.756844 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:06:24.756865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:06:24.756873 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:06:24.756878 | orchestrator |
2026-01-08 01:06:24.756883 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2026-01-08 01:06:24.756888 | orchestrator | Thursday 08 January 2026 01:02:51 +0000 (0:00:03.522) 0:01:03.393 ******
2026-01-08 01:06:24.756893 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:06:24.756897 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:06:24.757057 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:06:24.757070 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:06:24.757075 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:06:24.757080 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:06:24.757085 | orchestrator |
2026-01-08 01:06:24.757090 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2026-01-08 01:06:24.757096 | orchestrator | Thursday 08 January 2026 01:02:54 +0000 (0:00:03.461) 0:01:06.855 ******
2026-01-08 01:06:24.757101 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:06:24.757105 | orchestrator |
2026-01-08 01:06:24.757110 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2026-01-08 01:06:24.757115 | orchestrator | Thursday 08 January 2026 01:02:54 +0000 (0:00:00.406) 0:01:07.261 ******
2026-01-08 01:06:24.757120 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:06:24.757125 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:06:24.757130 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:06:24.757135 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:06:24.757140 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:06:24.757145 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:06:24.757150 | orchestrator |
2026-01-08 01:06:24.757154 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2026-01-08 01:06:24.757159 | orchestrator | Thursday 08 January 2026 01:02:56 +0000 (0:00:01.448) 0:01:08.710 ******
2026-01-08 01:06:24.757170 | orchestrator | skipping:
[testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:06:24.757176 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:24.757182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:06:24.757208 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:24.757213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:06:24.757218 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:24.757223 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-08 01:06:24.757228 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:06:24.757235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-08 01:06:24.757240 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:06:24.757245 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-08 01:06:24.757250 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:06:24.757258 | orchestrator |
2026-01-08 01:06:24.757263 | orchestrator | TASK [neutron : Copying over config.json files for services] *******************
2026-01-08 01:06:24.757267 | orchestrator | Thursday 08 January 2026 01:02:59 +0000 (0:00:03.476) 0:01:12.187 ******
2026-01-08 01:06:24.757285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:06:24.757292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']},
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:24.757302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:24.757307 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}})
2026-01-08 01:06:24.757316 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-08 01:06:24.757325 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-08 01:06:24.757330 | orchestrator |
2026-01-08 01:06:24.757336 | orchestrator | TASK [neutron : Copying over neutron.conf] *************************************
2026-01-08 01:06:24.757340 | orchestrator | Thursday 08 January 2026 01:03:05 +0000 (0:00:05.615) 0:01:17.803 ******
2026-01-08 01:06:24.757346 | orchestrator | changed: [testbed-node-1] => (item={'key':
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:24.757353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:24.757358 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:24.757372 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-08 01:06:24.757378 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-08 01:06:24.757383 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-08 01:06:24.757388 | orchestrator | 2026-01-08 01:06:24.757394 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-01-08 01:06:24.757399 | orchestrator | Thursday 08 January 2026 01:03:12 +0000 (0:00:07.038) 0:01:24.841 ****** 2026-01-08 01:06:24.757407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:06:24.757416 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:24.757425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:06:24.757430 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:24.757436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:06:24.757442 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:24.757448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-08 01:06:24.757453 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:06:24.757461 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-08 01:06:24.757469 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:06:24.757474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-08 01:06:24.757480 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:06:24.757485 | orchestrator | 2026-01-08 01:06:24.757491 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-01-08 01:06:24.757496 | orchestrator | Thursday 08 January 2026 01:03:15 +0000 (0:00:03.187) 0:01:28.029 ****** 2026-01-08 01:06:24.757502 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:06:24.757507 | 
orchestrator | changed: [testbed-node-0] 2026-01-08 01:06:24.757513 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:06:24.757519 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:06:24.757524 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:06:24.757529 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:06:24.757535 | orchestrator | 2026-01-08 01:06:24.757540 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-01-08 01:06:24.757548 | orchestrator | Thursday 08 January 2026 01:03:19 +0000 (0:00:03.810) 0:01:31.839 ****** 2026-01-08 01:06:24.757553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-08 01:06:24.757558 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:06:24.757563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-08 01:06:24.757568 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:06:24.757575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-08 01:06:24.757587 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:06:24.757592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:24.757602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:24.757607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:24.757612 | orchestrator | 2026-01-08 01:06:24.757621 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-01-08 01:06:24.757627 | orchestrator | Thursday 08 January 2026 01:03:24 +0000 (0:00:04.557) 0:01:36.397 ****** 2026-01-08 01:06:24.757632 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:24.757637 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:24.757642 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:24.757646 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:06:24.757651 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:06:24.757659 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:06:24.757664 | orchestrator | 2026-01-08 01:06:24.757669 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-01-08 01:06:24.757674 | orchestrator | Thursday 08 January 2026 01:03:29 +0000 (0:00:05.749) 0:01:42.146 ****** 2026-01-08 01:06:24.757679 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:24.757683 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:24.757689 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:24.757693 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:06:24.757697 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:06:24.757702 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:06:24.757707 | orchestrator | 2026-01-08 01:06:24.757712 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-01-08 01:06:24.757717 | orchestrator | Thursday 08 January 2026 01:03:34 +0000 (0:00:04.253) 
0:01:46.400 ****** 2026-01-08 01:06:24.757724 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:06:24.757730 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:06:24.757735 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:24.757740 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:24.757745 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:24.757750 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:06:24.757755 | orchestrator | 2026-01-08 01:06:24.757760 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-01-08 01:06:24.757764 | orchestrator | Thursday 08 January 2026 01:03:36 +0000 (0:00:02.393) 0:01:48.793 ****** 2026-01-08 01:06:24.757769 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:24.757774 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:06:24.757779 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:24.757784 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:24.757788 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:06:24.757793 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:06:24.757798 | orchestrator | 2026-01-08 01:06:24.757803 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-01-08 01:06:24.757808 | orchestrator | Thursday 08 January 2026 01:03:38 +0000 (0:00:02.500) 0:01:51.294 ****** 2026-01-08 01:06:24.757814 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:24.757819 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:24.757824 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:24.757829 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:06:24.757834 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:06:24.757839 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:06:24.757844 | orchestrator | 2026-01-08 01:06:24.757849 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] 
************************************* 2026-01-08 01:06:24.757853 | orchestrator | Thursday 08 January 2026 01:03:41 +0000 (0:00:02.205) 0:01:53.500 ****** 2026-01-08 01:06:24.757861 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-08 01:06:24.757963 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:24.757975 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-08 01:06:24.757980 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:24.757985 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-08 01:06:24.757990 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:24.757994 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-08 01:06:24.757999 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:06:24.758005 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-08 01:06:24.758010 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:06:24.758054 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-08 01:06:24.758061 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:06:24.758071 | orchestrator | 2026-01-08 01:06:24.758075 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-01-08 01:06:24.758080 | orchestrator | Thursday 08 January 2026 01:03:45 +0000 (0:00:04.005) 0:01:57.506 ****** 2026-01-08 01:06:24.758085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:06:24.758092 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:24.758100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:06:24.758106 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:24.758111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:06:24.758116 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:24.758125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-08 01:06:24.758165 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:06:24.758170 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-08 01:06:24.758175 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:06:24.758180 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-08 01:06:24.758185 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:06:24.758189 | orchestrator | 2026-01-08 01:06:24.758193 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-01-08 01:06:24.758198 | orchestrator | Thursday 08 January 2026 01:03:48 +0000 (0:00:02.964) 0:02:00.470 ****** 2026-01-08 01:06:24.758205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:06:24.758210 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:06:24.758215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:06:24.758223 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:06:24.758232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:06:24.758237 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:06:24.758242 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-08 01:06:24.758246 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:06:24.758253 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-08 01:06:24.758258 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:06:24.758263 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-08 01:06:24.758267 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:06:24.758272 | orchestrator |
2026-01-08 01:06:24.758276 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-01-08 01:06:24.758284 | orchestrator | Thursday 08 January 2026 01:03:49 +0000 (0:00:01.696) 0:02:02.166 ******
2026-01-08 01:06:24.758289 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:06:24.758293 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:06:24.758298 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:06:24.758302 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:06:24.758306 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:06:24.758311 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:06:24.758315 | orchestrator |
2026-01-08 01:06:24.758320 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-01-08 01:06:24.758324 | orchestrator | Thursday 08 January 2026 01:03:51 +0000 (0:00:01.808) 0:02:03.974 ******
2026-01-08 01:06:24.758329 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:06:24.758333 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:06:24.758338 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:06:24.758342 | orchestrator | changed: [testbed-node-3]
2026-01-08 01:06:24.758346 | orchestrator | changed: [testbed-node-4]
2026-01-08 01:06:24.758351 | orchestrator | changed: [testbed-node-5]
2026-01-08 01:06:24.758355 | orchestrator |
2026-01-08 01:06:24.758362 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-01-08 01:06:24.758367 | orchestrator | Thursday 08 January 2026 01:03:55 +0000 (0:00:03.825) 0:02:07.800 ******
2026-01-08 01:06:24.758371 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:06:24.758376 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:06:24.758380 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:06:24.758384 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:06:24.758389 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:06:24.758394 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:06:24.758398 | orchestrator |
2026-01-08 01:06:24.758403 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-01-08 01:06:24.758407 | orchestrator | Thursday 08 January 2026 01:03:59 +0000 (0:00:04.170) 0:02:11.970 ******
2026-01-08 01:06:24.758412 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:06:24.758416 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:06:24.758421 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:06:24.758425 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:06:24.758430 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:06:24.758434 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:06:24.758439 | orchestrator |
2026-01-08 01:06:24.758444 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-01-08 01:06:24.758449 | orchestrator | Thursday 08 January 2026 01:04:01 +0000 (0:00:02.318) 0:02:14.289 ******
2026-01-08 01:06:24.758454 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:06:24.758459 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:06:24.758464 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:06:24.758469 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:06:24.758474 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:06:24.758479 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:06:24.758484 | orchestrator |
2026-01-08 01:06:24.758489 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-01-08 01:06:24.758494 | orchestrator | Thursday 08 January 2026 01:04:03 +0000 (0:00:02.031) 0:02:16.321 ******
2026-01-08 01:06:24.758499 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:06:24.758503 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:06:24.758509 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:06:24.758514 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:06:24.758519 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:06:24.758524 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:06:24.758529 | orchestrator |
2026-01-08 01:06:24.758534 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-01-08 01:06:24.758538 | orchestrator | Thursday 08 January 2026 01:04:05 +0000 (0:00:01.945) 0:02:18.266 ******
2026-01-08 01:06:24.758543 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:06:24.758551 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:06:24.758556 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:06:24.758561 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:06:24.758566 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:06:24.758571 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:06:24.758577 | orchestrator |
2026-01-08 01:06:24.758582 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-01-08 01:06:24.758586 | orchestrator | Thursday 08 January 2026 01:04:08 +0000 (0:00:02.587) 0:02:20.853 ******
2026-01-08 01:06:24.758591 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:06:24.758596 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:06:24.758602 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:06:24.758607 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:06:24.758612 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:06:24.758620 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:06:24.758626 | orchestrator |
2026-01-08 01:06:24.758632 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-01-08 01:06:24.758637 | orchestrator | Thursday 08 January 2026 01:04:13 +0000 (0:00:05.028) 0:02:25.881 ******
2026-01-08 01:06:24.758642 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:06:24.758648 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:06:24.758654 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:06:24.758659 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:06:24.758665 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:06:24.758671 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:06:24.758676 | orchestrator |
2026-01-08 01:06:24.758682 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-01-08 01:06:24.758687 | orchestrator | Thursday 08 January 2026 01:04:16 +0000 (0:00:03.372) 0:02:29.254 ******
2026-01-08 01:06:24.758693 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-08 01:06:24.758699 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:06:24.758705 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-08 01:06:24.758710 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:06:24.758715 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-08 01:06:24.758720 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:06:24.758726 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-08 01:06:24.758731 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:06:24.758736 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-08 01:06:24.758741 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:06:24.758746 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-08 01:06:24.758752 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:06:24.758757 | orchestrator |
2026-01-08 01:06:24.758763 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-01-08 01:06:24.758768 | orchestrator | Thursday 08 January 2026 01:04:19 +0000 (0:00:02.746) 0:02:32.000 ******
2026-01-08 01:06:24.758779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:06:24.758787 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:06:24.758790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-08 01:06:24.758794 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:06:24.758799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:06:24.758803 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:06:24.758806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:06:24.758809 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:06:24.758817 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-08 01:06:24.758823 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-08 01:06:24.758827 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:06:24.758831 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:06:24.758835 | orchestrator |
2026-01-08 01:06:24.758840 | orchestrator | TASK [service-check-containers : neutron | Check containers] *******************
2026-01-08 01:06:24.758845 | orchestrator | Thursday 08 January 2026 01:04:21 +0000 (0:00:02.278) 0:02:34.279 ******
2026-01-08 01:06:24.758850 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-08 01:06:24.758860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:06:24.758866 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-08 01:06:24.758874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:06:24.758885 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-08 01:06:24.758893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:06:24.758898 | orchestrator |
2026-01-08 01:06:24.758903 | orchestrator | TASK [service-check-containers : neutron | Notify handlers to restart containers] ***
2026-01-08 01:06:24.758908 | orchestrator | Thursday 08 January 2026 01:04:25 +0000 (0:00:03.187) 0:02:37.466 ******
2026-01-08 01:06:24.758913 | orchestrator | changed: [testbed-node-0] => {
2026-01-08 01:06:24.758918 | orchestrator |  "msg": "Notifying handlers"
2026-01-08 01:06:24.758923 | orchestrator | }
2026-01-08 01:06:24.758963 | orchestrator | changed: [testbed-node-1] => {
2026-01-08 01:06:24.758968 | orchestrator |  "msg": "Notifying handlers"
2026-01-08 01:06:24.758974 | orchestrator | }
2026-01-08 01:06:24.758979 | orchestrator | changed: [testbed-node-2] => {
2026-01-08 01:06:24.758984 | orchestrator |  "msg": "Notifying handlers"
2026-01-08 01:06:24.758989 | orchestrator | }
2026-01-08 01:06:24.758995 | orchestrator | changed: [testbed-node-3] => {
2026-01-08 01:06:24.758999 | orchestrator |  "msg": "Notifying handlers"
2026-01-08 01:06:24.759004 | orchestrator | }
2026-01-08 01:06:24.759009 | orchestrator | changed: [testbed-node-4] => {
2026-01-08 01:06:24.759014 | orchestrator |  "msg": "Notifying handlers"
2026-01-08 01:06:24.759019 | orchestrator | }
2026-01-08 01:06:24.759023 | orchestrator | changed: [testbed-node-5] => {
2026-01-08 01:06:24.759028 | orchestrator |  "msg": "Notifying handlers"
2026-01-08 01:06:24.759033 | orchestrator | }
2026-01-08 01:06:24.759037 | orchestrator |
2026-01-08 01:06:24.759042 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-08 01:06:24.759051 | orchestrator | Thursday 08 January 2026 01:04:26 +0000 (0:00:00.886) 0:02:38.352 ******
2026-01-08 01:06:24.759063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:06:24.759069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:06:24.759074 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:06:24.759079 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:06:24.759087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2025.1', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:06:24.759092 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:06:24.759097 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-08 01:06:24.759105 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:06:24.759110 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-08 01:06:24.759116 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:06:24.759125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-08 01:06:24.759130 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:06:24.759135 | orchestrator |
2026-01-08 01:06:24.759140 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-08 01:06:24.759145 | orchestrator | Thursday 08 January 2026 01:04:28 +0000 (0:00:02.523) 0:02:40.876 ******
2026-01-08 01:06:24.759150 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:06:24.759155 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:06:24.759159 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:06:24.759164 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:06:24.759169 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:06:24.759174 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:06:24.759179 | orchestrator |
2026-01-08 01:06:24.759185 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-01-08 01:06:24.759190 | orchestrator | Thursday 08 January 2026 01:04:29 +0000 (0:00:00.512) 0:02:41.388 ******
2026-01-08 01:06:24.759196 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:06:24.759200 | orchestrator |
2026-01-08 01:06:24.759205 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-01-08 01:06:24.759210 | orchestrator | Thursday 08 January 2026 01:04:31 +0000 (0:00:02.166) 0:02:43.555 ******
2026-01-08 01:06:24.759215 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:06:24.759220 | orchestrator |
2026-01-08 01:06:24.759225 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-01-08 01:06:24.759229 | orchestrator | Thursday 08 January 2026 01:04:33 +0000 (0:00:02.536) 0:02:46.091 ******
2026-01-08 01:06:24.759234 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:06:24.759239 | orchestrator |
2026-01-08 01:06:24.759244 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-08 01:06:24.759249 | orchestrator | Thursday 08 January 2026 01:05:12 +0000 (0:00:39.018) 0:03:25.109 ******
2026-01-08 01:06:24.759254 | orchestrator |
2026-01-08 01:06:24.759258 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-08 01:06:24.759263 | orchestrator | Thursday 08 January 2026 01:05:12 +0000 (0:00:00.065) 0:03:25.175 ******
2026-01-08 01:06:24.759268 | orchestrator |
2026-01-08 01:06:24.759273 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-08 01:06:24.759280 | orchestrator | Thursday 08 January 2026 01:05:13 +0000 (0:00:00.244) 0:03:25.420 ******
2026-01-08 01:06:24.759285 | orchestrator |
2026-01-08 01:06:24.759292 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-08 01:06:24.759297 | orchestrator | Thursday 08 January 2026 01:05:13 +0000 (0:00:00.063) 0:03:25.483 ******
2026-01-08 01:06:24.759302 | orchestrator |
2026-01-08 01:06:24.759306 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-08 01:06:24.759311 | orchestrator | Thursday 08 January 2026 01:05:13 +0000 (0:00:00.071) 0:03:25.555 ******
2026-01-08 01:06:24.759316 | orchestrator |
2026-01-08 01:06:24.759321 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-08 01:06:24.759326 | orchestrator | Thursday 08 January 2026 01:05:13 +0000 (0:00:00.065) 0:03:25.621 ******
2026-01-08 01:06:24.759330 | orchestrator |
2026-01-08 01:06:24.759335 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-01-08 01:06:24.759340 | orchestrator | Thursday 08 January 2026 01:05:13 +0000 (0:00:00.065) 0:03:25.686 ******
2026-01-08 01:06:24.759345 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:06:24.759349 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:06:24.759355 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:06:24.759359 | orchestrator |
2026-01-08 01:06:24.759364 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-01-08 01:06:24.759369 | orchestrator | Thursday 08 January 2026 01:05:33 +0000 (0:00:20.522) 0:03:46.208 ******
2026-01-08 01:06:24.759374 | orchestrator | changed: [testbed-node-4]
2026-01-08 01:06:24.759378 | orchestrator | changed: [testbed-node-5]
2026-01-08 01:06:24.759383 | orchestrator | changed: [testbed-node-3]
2026-01-08 01:06:24.759388 | orchestrator |
2026-01-08 01:06:24.759393 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 01:06:24.759398 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-08 01:06:24.759404 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-01-08 01:06:24.759409 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-01-08 01:06:24.759413 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-08 01:06:24.759423 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-08 01:06:24.759428 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-08 01:06:24.759433 | orchestrator |
2026-01-08 01:06:24.759439 | orchestrator |
2026-01-08 01:06:24.759445 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 01:06:24.759450 | orchestrator | Thursday 08 January 2026 01:06:21 +0000 (0:00:47.406) 0:04:33.615 ******
2026-01-08 01:06:24.759454 | orchestrator | ===============================================================================
2026-01-08 01:06:24.759459 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 47.41s
2026-01-08 01:06:24.759463 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.02s
2026-01-08 01:06:24.759468 | orchestrator | neutron : Restart neutron-server container ----------------------------- 20.52s
2026-01-08 01:06:24.759473 | orchestrator | service-ks-register : neutron | Granting/revoking user roles ------------ 7.55s
2026-01-08 01:06:24.759478 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.04s
2026-01-08 01:06:24.759482 | orchestrator | service-ks-register : neutron | Creating/deleting endpoints ------------- 6.60s
2026-01-08 01:06:24.759490 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 5.75s
2026-01-08 01:06:24.759495 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.62s
2026-01-08 01:06:24.759500 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 5.03s
2026-01-08 01:06:24.759505 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.56s
2026-01-08 01:06:24.759510 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.53s
2026-01-08 01:06:24.759515 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 4.25s
2026-01-08 01:06:24.759521 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 4.22s
2026-01-08 01:06:24.759526 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 4.17s
2026-01-08 01:06:24.759531 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 4.14s
2026-01-08 01:06:24.759536 | orchestrator | neutron : Copying over dnsmasq.conf ------------------------------------- 4.01s
2026-01-08 01:06:24.759541 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.97s
2026-01-08 01:06:24.759546 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.94s
2026-01-08 01:06:24.759552 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.83s
2026-01-08 01:06:24.759557 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.81s
2026-01-08 01:06:24.759562 | orchestrator | 2026-01-08 01:06:24 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
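The interleaved `INFO | Task … is in state STARTED` / `Wait 1 second(s) until the next check` lines come from a driver that polls each background task until it reaches a terminal state. A minimal sketch of such a wait loop, assuming a hypothetical `get_state` callback (task ID to state string) rather than the real osism client API:

```python
import itertools
import time


def wait_for_tasks(task_ids, get_state, interval=1.0):
    """Poll task states until every task reports SUCCESS.

    `get_state` is a hypothetical callback (task_id -> state string);
    the actual deployment queries its task backend instead.
    """
    pending = set(task_ids)
    while pending:
        # sorted() copies pending, so discarding while iterating is safe
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)


# Demo with a canned state sequence: STARTED twice, then SUCCESS.
states = {"demo-task": itertools.chain(["STARTED", "STARTED"],
                                       itertools.repeat("SUCCESS"))}
wait_for_tasks(["demo-task"], lambda task_id: next(states[task_id]), interval=0)
```

With `interval=1.0` this reproduces the one-second cadence seen in the log; the demo uses `interval=0` only to finish instantly.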
2026-01-08 01:06:24.759571 | orchestrator | 2026-01-08 01:06:24 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED
2026-01-08 01:06:24.759576 | orchestrator | 2026-01-08 01:06:24 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED
2026-01-08 01:06:24.759582 | orchestrator | 2026-01-08 01:06:24 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:06:27.785037 | orchestrator | 2026-01-08 01:06:27 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED
2026-01-08 01:06:27.785518 | orchestrator | 2026-01-08 01:06:27 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:06:27.786332 | orchestrator | 2026-01-08 01:06:27 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED
2026-01-08 01:06:27.787184 | orchestrator | 2026-01-08 01:06:27 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED
2026-01-08 01:06:27.787207 | orchestrator | 2026-01-08 01:06:27 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:06:30.810430 | orchestrator | 2026-01-08 01:06:30 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED
2026-01-08 01:06:30.812309 | orchestrator | 2026-01-08 01:06:30 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:06:30.812373 | orchestrator | 2026-01-08 01:06:30 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED
2026-01-08 01:06:30.813106 | orchestrator | 2026-01-08 01:06:30 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED
2026-01-08 01:06:30.813146 | orchestrator | 2026-01-08 01:06:30 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:06:33.858236 | orchestrator | 2026-01-08 01:06:33 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED
2026-01-08 01:06:33.860552 | orchestrator | 2026-01-08 01:06:33 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:06:33.862713 | orchestrator | 2026-01-08 01:06:33 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED
2026-01-08 01:06:33.865435 | orchestrator | 2026-01-08 01:06:33 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED
2026-01-08 01:06:33.865507 | orchestrator | 2026-01-08 01:06:33 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:06:36.910996 | orchestrator | 2026-01-08 01:06:36 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED
2026-01-08 01:06:36.912666 | orchestrator | 2026-01-08 01:06:36 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:06:36.913463 | orchestrator | 2026-01-08 01:06:36 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED
2026-01-08 01:06:36.914167 | orchestrator | 2026-01-08 01:06:36 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED
2026-01-08 01:06:36.914198 | orchestrator | 2026-01-08 01:06:36 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:06:39.948448 | orchestrator | 2026-01-08 01:06:39 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED
2026-01-08 01:06:39.950167 | orchestrator | 2026-01-08 01:06:39 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:06:39.951395 | orchestrator | 2026-01-08 01:06:39 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED
2026-01-08 01:06:39.953314 | orchestrator | 2026-01-08 01:06:39 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED
2026-01-08 01:06:39.953374 | orchestrator | 2026-01-08 01:06:39 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:06:42.984413 | orchestrator | 2026-01-08 01:06:42 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED
2026-01-08 01:06:42.987465 | orchestrator | 2026-01-08 01:06:42 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:06:42.988012 | orchestrator | 2026-01-08 01:06:42 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED
2026-01-08 01:06:42.989331 | orchestrator | 2026-01-08 01:06:42 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED
2026-01-08 01:06:42.989369 | orchestrator | 2026-01-08 01:06:42 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:06:46.016392 | orchestrator | 2026-01-08 01:06:46 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED
2026-01-08 01:06:46.018178 | orchestrator | 2026-01-08 01:06:46 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:06:46.018220 | orchestrator | 2026-01-08 01:06:46 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED
2026-01-08 01:06:46.018232 | orchestrator | 2026-01-08 01:06:46 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED
2026-01-08 01:06:46.019506 | orchestrator | 2026-01-08 01:06:46 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:06:49.059658 | orchestrator | 2026-01-08 01:06:49 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED
2026-01-08 01:06:49.062240 | orchestrator | 2026-01-08 01:06:49 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:06:49.064457 | orchestrator | 2026-01-08 01:06:49 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED
2026-01-08 01:06:49.066135 | orchestrator | 2026-01-08 01:06:49 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED
2026-01-08 01:06:49.066262 | orchestrator | 2026-01-08 01:06:49 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:06:52.108761 | orchestrator | 2026-01-08 01:06:52 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED
2026-01-08 01:06:52.111020 | orchestrator | 2026-01-08 01:06:52 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state STARTED
2026-01-08 01:06:52.113249 | orchestrator | 2026-01-08 01:06:52 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED
2026-01-08 01:06:52.114386 | orchestrator | 2026-01-08 01:06:52 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED
2026-01-08 01:06:52.114682 | orchestrator | 2026-01-08 01:06:52 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:06:55.140040 | orchestrator | 2026-01-08 01:06:55 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED
2026-01-08 01:06:55.141860 | orchestrator | 2026-01-08 01:06:55 | INFO  | Task 45828640-99ae-4b58-b19f-89179e3af269 is in state SUCCESS
2026-01-08 01:06:55.143041 | orchestrator |
2026-01-08 01:06:55.143084 | orchestrator |
2026-01-08 01:06:55.143090 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-08 01:06:55.143095 | orchestrator |
2026-01-08 01:06:55.143099 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-08 01:06:55.143111 | orchestrator | Thursday 08 January 2026 01:05:04 +0000 (0:00:00.257) 0:00:00.257 ******
2026-01-08 01:06:55.143125 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:06:55.143133 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:06:55.143139 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:06:55.143145 | orchestrator |
2026-01-08 01:06:55.143152 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-08 01:06:55.143158 | orchestrator | Thursday 08 January 2026 01:05:05 +0000 (0:00:00.303) 0:00:00.560 ******
2026-01-08 01:06:55.143165 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-01-08 01:06:55.143172 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-01-08 01:06:55.143178 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-01-08 01:06:55.143182 | orchestrator |
2026-01-08 01:06:55.143185 | orchestrator | PLAY [Apply role magnum]
*******************************************************
2026-01-08 01:06:55.143189 | orchestrator |
2026-01-08 01:06:55.143193 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-01-08 01:06:55.143197 | orchestrator | Thursday 08 January 2026 01:05:05 +0000 (0:00:00.495) 0:00:01.055 ******
2026-01-08 01:06:55.143201 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 01:06:55.143205 | orchestrator |
2026-01-08 01:06:55.143209 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting services] ***************
2026-01-08 01:06:55.143213 | orchestrator | Thursday 08 January 2026 01:05:06 +0000 (0:00:00.562) 0:00:01.618 ******
2026-01-08 01:06:55.143217 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-01-08 01:06:55.143221 | orchestrator |
2026-01-08 01:06:55.143225 | orchestrator | TASK [service-ks-register : magnum | Creating/deleting endpoints] **************
2026-01-08 01:06:55.143228 | orchestrator | Thursday 08 January 2026 01:05:09 +0000 (0:00:03.182) 0:00:04.800 ******
2026-01-08 01:06:55.143232 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-01-08 01:06:55.143236 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-01-08 01:06:55.143240 | orchestrator |
2026-01-08 01:06:55.143244 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-01-08 01:06:55.143247 | orchestrator | Thursday 08 January 2026 01:05:15 +0000 (0:00:06.174) 0:00:10.975 ******
2026-01-08 01:06:55.143251 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-08 01:06:55.143255 | orchestrator |
2026-01-08 01:06:55.143259 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-01-08 01:06:55.143263 | orchestrator | Thursday 08 January 2026 01:05:18 +0000 (0:00:02.925) 0:00:13.901 ******
2026-01-08 01:06:55.143266 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-08 01:06:55.143270 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-01-08 01:06:55.143274 | orchestrator |
2026-01-08 01:06:55.143278 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-01-08 01:06:55.143294 | orchestrator | Thursday 08 January 2026 01:05:22 +0000 (0:00:03.701) 0:00:17.603 ******
2026-01-08 01:06:55.143299 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-08 01:06:55.143303 | orchestrator |
2026-01-08 01:06:55.143307 | orchestrator | TASK [service-ks-register : magnum | Granting/revoking user roles] *************
2026-01-08 01:06:55.143317 | orchestrator | Thursday 08 January 2026 01:05:25 +0000 (0:00:03.643) 0:00:21.246 ******
2026-01-08 01:06:55.143321 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-01-08 01:06:55.143324 | orchestrator |
2026-01-08 01:06:55.143332 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-01-08 01:06:55.143342 | orchestrator | Thursday 08 January 2026 01:05:29 +0000 (0:00:03.797) 0:00:25.043 ******
2026-01-08 01:06:55.143350 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:06:55.143354 | orchestrator |
2026-01-08 01:06:55.143359 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-01-08 01:06:55.143363 | orchestrator | Thursday 08 January 2026 01:05:32 +0000 (0:00:03.057) 0:00:28.101 ******
2026-01-08 01:06:55.143367 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:06:55.143371 | orchestrator |
2026-01-08 01:06:55.143374 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-01-08 01:06:55.143378 | orchestrator | Thursday
08 January 2026 01:05:36 +0000 (0:00:03.953) 0:00:32.054 ****** 2026-01-08 01:06:55.143382 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:06:55.143386 | orchestrator | 2026-01-08 01:06:55.143392 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-01-08 01:06:55.143397 | orchestrator | Thursday 08 January 2026 01:05:40 +0000 (0:00:03.580) 0:00:35.635 ****** 2026-01-08 01:06:55.143419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:55.143430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:55.143438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:55.143454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:06:55.143462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:06:55.143474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:06:55.143480 | orchestrator | 2026-01-08 01:06:55.143488 | orchestrator | TASK [magnum : Check 
if policies shall be overwritten] *************************
2026-01-08 01:06:55.143495 | orchestrator | Thursday 08 January 2026 01:05:42 +0000 (0:00:01.814) 0:00:37.449 ******
2026-01-08 01:06:55.143502 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:06:55.143508 | orchestrator |
2026-01-08 01:06:55.143514 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-01-08 01:06:55.143521 | orchestrator | Thursday 08 January 2026 01:05:42 +0000 (0:00:00.150) 0:00:37.600 ******
2026-01-08 01:06:55.143532 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:06:55.143538 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:06:55.143545 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:06:55.143551 | orchestrator |
2026-01-08 01:06:55.143558 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-01-08 01:06:55.143564 | orchestrator | Thursday 08 January 2026 01:05:42 +0000 (0:00:00.607) 0:00:38.208 ******
2026-01-08 01:06:55.143575 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-08 01:06:55.143582 | orchestrator |
2026-01-08 01:06:55.143588 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-01-08 01:06:55.143595 | orchestrator | Thursday 08 January 2026 01:05:43 +0000 (0:00:00.971) 0:00:39.179 ******
2026-01-08 01:06:55.143602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:55.143613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:55.143623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:55.143630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:06:55.143641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-08 01:06:55.143648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-08 01:06:55.143655 | orchestrator |
2026-01-08 01:06:55.143665 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-01-08 01:06:55.143672 | orchestrator | Thursday 08 January 2026 01:05:46 +0000 (0:00:02.688) 0:00:41.867 ******
2026-01-08 01:06:55.143679 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:06:55.143685 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:06:55.143692 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:06:55.143699 | orchestrator |
2026-01-08 01:06:55.143706 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-01-08 01:06:55.143713 | orchestrator | Thursday 08 January 2026 01:05:46 +0000 (0:00:00.311) 0:00:42.178 ******
2026-01-08 01:06:55.143720 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 01:06:55.143726 | orchestrator |
2026-01-08 01:06:55.143731 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-01-08 01:06:55.143738 | orchestrator |
Thursday 08 January 2026 01:05:47 +0000 (0:00:00.900) 0:00:43.079 ****** 2026-01-08 01:06:55.143749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:55.143756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:55.143767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:55.143778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:06:55.143786 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:06:55.143796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:06:55.143806 | orchestrator | 2026-01-08 01:06:55.143813 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-01-08 01:06:55.143820 | orchestrator | Thursday 08 January 2026 01:05:49 +0000 (0:00:02.318) 0:00:45.397 ****** 2026-01-08 01:06:55.143827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:06:55.143833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-08 01:06:55.143840 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:55.143850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:06:55.143857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-08 01:06:55.143863 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:55.143874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:06:55.143887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-08 01:06:55.143894 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:55.143900 | orchestrator | 2026-01-08 01:06:55.143906 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-01-08 01:06:55.143912 | orchestrator | Thursday 08 January 2026 01:05:50 +0000 (0:00:00.839) 0:00:46.236 ****** 2026-01-08 01:06:55.143921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:06:55.143928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:06:55.143998 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-08 01:06:55.144014 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:55.144026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-08 01:06:55.144033 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:55.144039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:06:55.144049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-08 01:06:55.144055 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:55.144062 | orchestrator | 2026-01-08 01:06:55.144067 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-01-08 01:06:55.144072 | orchestrator | Thursday 08 January 2026 01:05:52 +0000 (0:00:01.452) 0:00:47.689 ****** 2026-01-08 01:06:55.144083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:55.144097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:55.144105 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:55.144114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:06:55.144121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:06:55.144189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:06:55.144198 | orchestrator | 2026-01-08 01:06:55.144204 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-01-08 01:06:55.144211 | orchestrator | Thursday 08 January 2026 01:05:54 +0000 (0:00:02.094) 0:00:49.784 ****** 2026-01-08 01:06:55.144218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:55.144225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:55.144235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:55.144249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:06:55.144256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:06:55.144263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:06:55.144270 | orchestrator | 2026-01-08 01:06:55.144276 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-01-08 01:06:55.144282 | orchestrator | Thursday 08 January 2026 01:05:59 +0000 (0:00:04.726) 0:00:54.510 ****** 2026-01-08 01:06:55.144292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:06:55.144299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-08 01:06:55.144306 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:55.144314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:06:55.144319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-08 01:06:55.144323 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:55.144327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:06:55.144333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-08 01:06:55.144344 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:55.144348 | orchestrator | 2026-01-08 01:06:55.144352 | orchestrator | TASK [service-check-containers : magnum | Check containers] ******************** 2026-01-08 01:06:55.144356 | orchestrator | Thursday 08 January 2026 01:05:59 +0000 (0:00:00.907) 0:00:55.418 ****** 2026-01-08 01:06:55.144363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:55.144367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:55.144373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:06:55.144382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:06:55.144397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:06:55.144408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:06:55.144414 | orchestrator | 2026-01-08 01:06:55.144420 | orchestrator | TASK [service-check-containers : magnum | Notify handlers to restart containers] *** 2026-01-08 01:06:55.144426 | orchestrator | Thursday 08 January 2026 01:06:02 +0000 (0:00:02.405) 0:00:57.824 ****** 2026-01-08 01:06:55.144433 | orchestrator | changed: [testbed-node-0] => { 2026-01-08 01:06:55.144439 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:06:55.144445 | orchestrator | } 2026-01-08 01:06:55.144451 | orchestrator | changed: [testbed-node-1] => { 2026-01-08 01:06:55.144456 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:06:55.144461 | orchestrator | } 2026-01-08 01:06:55.144467 | orchestrator | changed: [testbed-node-2] => { 2026-01-08 01:06:55.144473 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:06:55.144481 | orchestrator | } 2026-01-08 01:06:55.144487 | orchestrator | 2026-01-08 01:06:55.144494 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-08 01:06:55.144500 | orchestrator | 
Thursday 08 January 2026 01:06:02 +0000 (0:00:00.402) 0:00:58.226 ****** 2026-01-08 01:06:55.144507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:06:55.144517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-08 01:06:55.144528 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:55.144534 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:06:55.144547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-08 01:06:55.144554 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:55.144561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2025.1', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:06:55.144568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-08 01:06:55.144581 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:55.144588 | orchestrator | 2026-01-08 01:06:55.144595 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-01-08 01:06:55.144602 | orchestrator | Thursday 08 January 2026 
01:06:03 +0000 (0:00:00.796) 0:00:59.023 ****** 2026-01-08 01:06:55.144608 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:06:55.144612 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:06:55.144615 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:06:55.144619 | orchestrator | 2026-01-08 01:06:55.144625 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-01-08 01:06:55.144630 | orchestrator | Thursday 08 January 2026 01:06:04 +0000 (0:00:00.524) 0:00:59.547 ****** 2026-01-08 01:06:55.144633 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:06:55.144638 | orchestrator | 2026-01-08 01:06:55.144642 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-01-08 01:06:55.144645 | orchestrator | Thursday 08 January 2026 01:06:06 +0000 (0:00:02.321) 0:01:01.869 ****** 2026-01-08 01:06:55.144650 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:06:55.144653 | orchestrator | 2026-01-08 01:06:55.144657 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-01-08 01:06:55.144661 | orchestrator | Thursday 08 January 2026 01:06:08 +0000 (0:00:02.536) 0:01:04.405 ****** 2026-01-08 01:06:55.144665 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:06:55.144669 | orchestrator | 2026-01-08 01:06:55.144672 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-08 01:06:55.144676 | orchestrator | Thursday 08 January 2026 01:06:25 +0000 (0:00:16.748) 0:01:21.154 ****** 2026-01-08 01:06:55.144680 | orchestrator | 2026-01-08 01:06:55.144684 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-08 01:06:55.144687 | orchestrator | Thursday 08 January 2026 01:06:25 +0000 (0:00:00.124) 0:01:21.279 ****** 2026-01-08 01:06:55.144691 | orchestrator | 2026-01-08 01:06:55.144695 | orchestrator | TASK 
[magnum : Flush handlers] ************************************************* 2026-01-08 01:06:55.144699 | orchestrator | Thursday 08 January 2026 01:06:25 +0000 (0:00:00.122) 0:01:21.401 ****** 2026-01-08 01:06:55.144703 | orchestrator | 2026-01-08 01:06:55.144707 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-01-08 01:06:55.144710 | orchestrator | Thursday 08 January 2026 01:06:26 +0000 (0:00:00.108) 0:01:21.509 ****** 2026-01-08 01:06:55.144714 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:06:55.144718 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:06:55.144722 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:06:55.144726 | orchestrator | 2026-01-08 01:06:55.144729 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-01-08 01:06:55.144733 | orchestrator | Thursday 08 January 2026 01:06:38 +0000 (0:00:12.472) 0:01:33.982 ****** 2026-01-08 01:06:55.144737 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:06:55.144745 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:06:55.144749 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:06:55.144753 | orchestrator | 2026-01-08 01:06:55.144757 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 01:06:55.144761 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-08 01:06:55.144766 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-08 01:06:55.144770 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-08 01:06:55.144780 | orchestrator | 2026-01-08 01:06:55.144833 | orchestrator | 2026-01-08 01:06:55.144841 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 01:06:55.144845 | 
orchestrator | Thursday 08 January 2026 01:06:53 +0000 (0:00:15.360) 0:01:49.343 ****** 2026-01-08 01:06:55.144849 | orchestrator | =============================================================================== 2026-01-08 01:06:55.144854 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.75s 2026-01-08 01:06:55.144858 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.36s 2026-01-08 01:06:55.144862 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 12.47s 2026-01-08 01:06:55.144866 | orchestrator | service-ks-register : magnum | Creating/deleting endpoints -------------- 6.17s 2026-01-08 01:06:55.144870 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.73s 2026-01-08 01:06:55.144874 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.95s 2026-01-08 01:06:55.144878 | orchestrator | service-ks-register : magnum | Granting/revoking user roles ------------- 3.80s 2026-01-08 01:06:55.144882 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.70s 2026-01-08 01:06:55.144885 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.64s 2026-01-08 01:06:55.144889 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.58s 2026-01-08 01:06:55.144893 | orchestrator | service-ks-register : magnum | Creating/deleting services --------------- 3.18s 2026-01-08 01:06:55.144897 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.06s 2026-01-08 01:06:55.144901 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 2.93s 2026-01-08 01:06:55.144905 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.69s 2026-01-08 01:06:55.144909 | orchestrator | magnum 
: Creating Magnum database user and setting permissions ---------- 2.54s 2026-01-08 01:06:55.144913 | orchestrator | service-check-containers : magnum | Check containers -------------------- 2.41s 2026-01-08 01:06:55.144917 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.32s 2026-01-08 01:06:55.144921 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.32s 2026-01-08 01:06:55.144925 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.09s 2026-01-08 01:06:55.144929 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.81s 2026-01-08 01:06:55.144936 | orchestrator | 2026-01-08 01:06:55 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:06:55.144941 | orchestrator | 2026-01-08 01:06:55 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED 2026-01-08 01:06:55.144945 | orchestrator | 2026-01-08 01:06:55 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:06:58.183802 | orchestrator | 2026-01-08 01:06:58 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:06:58.184340 | orchestrator | 2026-01-08 01:06:58 | INFO  | Task ad48f9ea-bfdc-4032-9ff8-f99f61db2801 is in state STARTED 2026-01-08 01:06:58.185410 | orchestrator | 2026-01-08 01:06:58 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:06:58.186223 | orchestrator | 2026-01-08 01:06:58 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED 2026-01-08 01:06:58.186358 | orchestrator | 2026-01-08 01:06:58 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:07:01.214977 | orchestrator | 2026-01-08 01:07:01 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:07:01.215575 | orchestrator | 2026-01-08 01:07:01 | INFO  | Task ad48f9ea-bfdc-4032-9ff8-f99f61db2801 is in state 
STARTED 2026-01-08 01:07:01.216184 | orchestrator | 2026-01-08 01:07:01 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:07:01.218160 | orchestrator | 2026-01-08 01:07:01 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED 2026-01-08 01:07:01.218202 | orchestrator | 2026-01-08 01:07:01 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:07:04.249807 | orchestrator | 2026-01-08 01:07:04 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:07:04.250730 | orchestrator | 2026-01-08 01:07:04 | INFO  | Task ad48f9ea-bfdc-4032-9ff8-f99f61db2801 is in state STARTED 2026-01-08 01:07:04.251762 | orchestrator | 2026-01-08 01:07:04 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:07:04.252748 | orchestrator | 2026-01-08 01:07:04 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED 2026-01-08 01:07:04.252779 | orchestrator | 2026-01-08 01:07:04 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:07:07.287697 | orchestrator | 2026-01-08 01:07:07 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:07:07.300780 | orchestrator | 2026-01-08 01:07:07 | INFO  | Task ad48f9ea-bfdc-4032-9ff8-f99f61db2801 is in state STARTED 2026-01-08 01:07:07.300848 | orchestrator | 2026-01-08 01:07:07 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:07:07.300862 | orchestrator | 2026-01-08 01:07:07 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED 2026-01-08 01:07:07.300873 | orchestrator | 2026-01-08 01:07:07 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:07:10.324029 | orchestrator | 2026-01-08 01:07:10 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:07:10.324730 | orchestrator | 2026-01-08 01:07:10 | INFO  | Task ad48f9ea-bfdc-4032-9ff8-f99f61db2801 is in state STARTED 2026-01-08 
01:07:10.325664 | orchestrator | 2026-01-08 01:07:10 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:07:10.326575 | orchestrator | 2026-01-08 01:07:10 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED 2026-01-08 01:07:10.326784 | orchestrator | 2026-01-08 01:07:10 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:07:13.363690 | orchestrator | 2026-01-08 01:07:13 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:07:13.365426 | orchestrator | 2026-01-08 01:07:13 | INFO  | Task ad48f9ea-bfdc-4032-9ff8-f99f61db2801 is in state STARTED 2026-01-08 01:07:13.365925 | orchestrator | 2026-01-08 01:07:13 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:07:13.368239 | orchestrator | 2026-01-08 01:07:13 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED 2026-01-08 01:07:13.368361 | orchestrator | 2026-01-08 01:07:13 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:07:16.398444 | orchestrator | 2026-01-08 01:07:16 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:07:16.398820 | orchestrator | 2026-01-08 01:07:16 | INFO  | Task ad48f9ea-bfdc-4032-9ff8-f99f61db2801 is in state STARTED 2026-01-08 01:07:16.401369 | orchestrator | 2026-01-08 01:07:16 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:07:16.402883 | orchestrator | 2026-01-08 01:07:16 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED 2026-01-08 01:07:16.402932 | orchestrator | 2026-01-08 01:07:16 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:07:19.433245 | orchestrator | 2026-01-08 01:07:19 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:07:19.433901 | orchestrator | 2026-01-08 01:07:19 | INFO  | Task ad48f9ea-bfdc-4032-9ff8-f99f61db2801 is in state STARTED 2026-01-08 01:07:19.435578 | orchestrator 
| 2026-01-08 01:07:19 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:07:19.436137 | orchestrator | 2026-01-08 01:07:19 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED 2026-01-08 01:07:19.436180 | orchestrator | 2026-01-08 01:07:19 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:07:22.475470 | orchestrator | 2026-01-08 01:07:22 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:07:22.476691 | orchestrator | 2026-01-08 01:07:22 | INFO  | Task ad48f9ea-bfdc-4032-9ff8-f99f61db2801 is in state STARTED 2026-01-08 01:07:22.479008 | orchestrator | 2026-01-08 01:07:22 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:07:22.481709 | orchestrator | 2026-01-08 01:07:22 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED 2026-01-08 01:07:22.481750 | orchestrator | 2026-01-08 01:07:22 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:07:25.528224 | orchestrator | 2026-01-08 01:07:25 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:07:25.529801 | orchestrator | 2026-01-08 01:07:25 | INFO  | Task ad48f9ea-bfdc-4032-9ff8-f99f61db2801 is in state STARTED 2026-01-08 01:07:25.531662 | orchestrator | 2026-01-08 01:07:25 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:07:25.533765 | orchestrator | 2026-01-08 01:07:25 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED 2026-01-08 01:07:25.534070 | orchestrator | 2026-01-08 01:07:25 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:07:28.585619 | orchestrator | 2026-01-08 01:07:28 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:07:28.586117 | orchestrator | 2026-01-08 01:07:28 | INFO  | Task ad48f9ea-bfdc-4032-9ff8-f99f61db2801 is in state STARTED 2026-01-08 01:07:28.587126 | orchestrator | 2026-01-08 01:07:28 | INFO  | 
Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:07:28.588242 | orchestrator | 2026-01-08 01:07:28 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED 2026-01-08 01:07:28.588272 | orchestrator | 2026-01-08 01:07:28 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:07:31.633454 | orchestrator | 2026-01-08 01:07:31 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:07:31.635223 | orchestrator | 2026-01-08 01:07:31 | INFO  | Task ad48f9ea-bfdc-4032-9ff8-f99f61db2801 is in state STARTED 2026-01-08 01:07:31.637714 | orchestrator | 2026-01-08 01:07:31 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:07:31.639351 | orchestrator | 2026-01-08 01:07:31 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED 2026-01-08 01:07:31.639385 | orchestrator | 2026-01-08 01:07:31 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:07:34.678661 | orchestrator | 2026-01-08 01:07:34 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:07:34.680243 | orchestrator | 2026-01-08 01:07:34 | INFO  | Task ad48f9ea-bfdc-4032-9ff8-f99f61db2801 is in state STARTED 2026-01-08 01:07:34.681130 | orchestrator | 2026-01-08 01:07:34 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:07:34.682130 | orchestrator | 2026-01-08 01:07:34 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED 2026-01-08 01:07:34.682290 | orchestrator | 2026-01-08 01:07:34 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:07:37.720956 | orchestrator | 2026-01-08 01:07:37 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:07:37.721693 | orchestrator | 2026-01-08 01:07:37 | INFO  | Task ad48f9ea-bfdc-4032-9ff8-f99f61db2801 is in state STARTED 2026-01-08 01:07:37.722587 | orchestrator | 2026-01-08 01:07:37 | INFO  | Task 
3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:07:37.724397 | orchestrator | 2026-01-08 01:07:37 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED 2026-01-08 01:07:37.724429 | orchestrator | 2026-01-08 01:07:37 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:07:40.785250 | orchestrator | 2026-01-08 01:07:40 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:07:40.785709 | orchestrator | 2026-01-08 01:07:40 | INFO  | Task ad48f9ea-bfdc-4032-9ff8-f99f61db2801 is in state STARTED 2026-01-08 01:07:40.788446 | orchestrator | 2026-01-08 01:07:40 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:07:40.790120 | orchestrator | 2026-01-08 01:07:40 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED 2026-01-08 01:07:40.790166 | orchestrator | 2026-01-08 01:07:40 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:07:43.833054 | orchestrator | 2026-01-08 01:07:43 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:07:43.835884 | orchestrator | 2026-01-08 01:07:43 | INFO  | Task ad48f9ea-bfdc-4032-9ff8-f99f61db2801 is in state STARTED 2026-01-08 01:07:43.837538 | orchestrator | 2026-01-08 01:07:43 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:07:43.839296 | orchestrator | 2026-01-08 01:07:43 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED 2026-01-08 01:07:43.839350 | orchestrator | 2026-01-08 01:07:43 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:07:46.883450 | orchestrator | 2026-01-08 01:07:46 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:07:46.886210 | orchestrator | 2026-01-08 01:07:46 | INFO  | Task ad48f9ea-bfdc-4032-9ff8-f99f61db2801 is in state STARTED 2026-01-08 01:07:46.888450 | orchestrator | 2026-01-08 01:07:46 | INFO  | Task 
3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:07:46.893432 | orchestrator | 2026-01-08 01:07:46 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED 2026-01-08 01:07:46.893491 | orchestrator | 2026-01-08 01:07:46 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:07:49.942266 | orchestrator | 2026-01-08 01:07:49 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:07:49.946249 | orchestrator | 2026-01-08 01:07:49 | INFO  | Task ad48f9ea-bfdc-4032-9ff8-f99f61db2801 is in state STARTED 2026-01-08 01:07:49.948877 | orchestrator | 2026-01-08 01:07:49 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:07:49.952470 | orchestrator | 2026-01-08 01:07:49 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED 2026-01-08 01:07:49.952561 | orchestrator | 2026-01-08 01:07:49 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:07:53.004395 | orchestrator | 2026-01-08 01:07:53 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:07:53.006966 | orchestrator | 2026-01-08 01:07:53 | INFO  | Task ad48f9ea-bfdc-4032-9ff8-f99f61db2801 is in state STARTED 2026-01-08 01:07:53.012320 | orchestrator | 2026-01-08 01:07:53 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED 2026-01-08 01:07:53.014898 | orchestrator | 2026-01-08 01:07:53 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED 2026-01-08 01:07:53.015097 | orchestrator | 2026-01-08 01:07:53 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:07:56.064928 | orchestrator | 2026-01-08 01:07:56 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:07:56.065015 | orchestrator | 2026-01-08 01:07:56 | INFO  | Task ad48f9ea-bfdc-4032-9ff8-f99f61db2801 is in state STARTED 2026-01-08 01:07:56.067212 | orchestrator | 2026-01-08 01:07:56 | INFO  | Task 
3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED
2026-01-08 01:07:56.068652 | orchestrator | 2026-01-08 01:07:56 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED
2026-01-08 01:07:56.068693 | orchestrator | 2026-01-08 01:07:56 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:08:32.621091 | orchestrator | 2026-01-08 01:08:32 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED
2026-01-08 01:08:32.623252 | orchestrator | 2026-01-08 01:08:32 | INFO  | Task ad48f9ea-bfdc-4032-9ff8-f99f61db2801 is in state SUCCESS
2026-01-08 01:08:32.624448 | orchestrator | 2026-01-08 01:08:32.624477 | orchestrator | 2026-01-08
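The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" records above come from a client polling remote task states once per interval until none remain running. A minimal sketch of such a wait loop, assuming a hypothetical `get_state(task_id)` lookup function (this is an illustration, not the actual osism client code):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=300.0):
    """Poll task states until no task is STARTED anymore, or until timeout.

    get_state: callable mapping a task id to a state string (hypothetical).
    Returns the last observed state per task id.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    states = {}
    while pending and time.monotonic() < deadline:
        for task_id in sorted(pending):
            state = get_state(task_id)
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
        # Keep only tasks that are still running for the next cycle.
        pending = {t for t in pending if states[t] == "STARTED"}
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return states
```

In the log, four task ids are polled every few seconds until `ad48f9ea-…` transitions from STARTED to SUCCESS, at which point the buffered Ansible play output is printed.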
01:08:32.624482 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-08 01:08:32.624487 | orchestrator | 2026-01-08 01:08:32.624492 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-08 01:08:32.624496 | orchestrator | Thursday 08 January 2026 01:07:00 +0000 (0:00:00.454) 0:00:00.454 ****** 2026-01-08 01:08:32.624500 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:08:32.624505 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:08:32.624509 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:08:32.624513 | orchestrator | 2026-01-08 01:08:32.624517 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-08 01:08:32.624521 | orchestrator | Thursday 08 January 2026 01:07:00 +0000 (0:00:00.284) 0:00:00.738 ****** 2026-01-08 01:08:32.624525 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-01-08 01:08:32.624529 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-01-08 01:08:32.624533 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-01-08 01:08:32.624537 | orchestrator | 2026-01-08 01:08:32.624541 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-01-08 01:08:32.624545 | orchestrator | 2026-01-08 01:08:32.624549 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-08 01:08:32.624553 | orchestrator | Thursday 08 January 2026 01:07:00 +0000 (0:00:00.380) 0:00:01.118 ****** 2026-01-08 01:08:32.624557 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 01:08:32.624562 | orchestrator | 2026-01-08 01:08:32.624566 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-01-08 01:08:32.624570 | orchestrator | Thursday 08 January 
2026 01:07:01 +0000 (0:00:00.704) 0:00:01.822 ****** 2026-01-08 01:08:32.624575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:08:32.624588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:08:32.624604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:08:32.624608 | orchestrator | 2026-01-08 01:08:32.624612 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-01-08 01:08:32.624616 | orchestrator | Thursday 08 January 2026 01:07:02 +0000 (0:00:00.794) 0:00:02.617 ****** 2026-01-08 01:08:32.624620 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-08 01:08:32.624625 | orchestrator | 2026-01-08 01:08:32.624629 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-08 01:08:32.624633 | orchestrator | Thursday 08 January 2026 01:07:03 +0000 (0:00:00.895) 0:00:03.512 ****** 2026-01-08 01:08:32.624637 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 01:08:32.624641 | orchestrator | 2026-01-08 01:08:32.624645 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-01-08 01:08:32.624657 | orchestrator | Thursday 08 January 2026 01:07:04 +0000 (0:00:00.827) 0:00:04.340 ****** 2026-01-08 01:08:32.624662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:08:32.624666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:08:32.624672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:08:32.624679 | orchestrator | 2026-01-08 01:08:32.624683 | orchestrator | TASK [service-cert-copy : grafana | Copying 
over backend internal TLS certificate] *** 2026-01-08 01:08:32.624687 | orchestrator | Thursday 08 January 2026 01:07:05 +0000 (0:00:01.629) 0:00:05.969 ****** 2026-01-08 01:08:32.624691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:08:32.624696 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:08:32.624700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:08:32.624704 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:08:32.624711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:08:32.624715 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:08:32.624719 | orchestrator | 2026-01-08 01:08:32.624723 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-01-08 01:08:32.624727 | orchestrator | Thursday 08 January 2026 01:07:06 +0000 (0:00:00.678) 0:00:06.648 ****** 2026-01-08 01:08:32.624732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:08:32.624739 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:08:32.624744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 
'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:08:32.624749 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:08:32.624753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:08:32.624757 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:08:32.624761 | orchestrator | 2026-01-08 01:08:32.624765 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-01-08 01:08:32.624769 | orchestrator | Thursday 08 January 2026 01:07:07 +0000 (0:00:00.917) 0:00:07.566 ****** 2026-01-08 01:08:32.624775 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:08:32.624780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:08:32.624784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:08:32.624791 | orchestrator | 2026-01-08 01:08:32.624795 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-01-08 01:08:32.624799 | orchestrator | Thursday 08 January 2026 01:07:08 +0000 (0:00:01.367) 0:00:08.934 ****** 2026-01-08 01:08:32.624805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:08:32.624809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:08:32.624814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:08:32.624818 | orchestrator | 2026-01-08 01:08:32.624822 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-01-08 01:08:32.624828 | orchestrator | Thursday 08 January 2026 01:07:10 +0000 (0:00:01.429) 0:00:10.363 ****** 2026-01-08 01:08:32.624832 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:08:32.624836 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:08:32.624840 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:08:32.624844 | orchestrator | 2026-01-08 01:08:32.624848 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-01-08 01:08:32.624852 | orchestrator | Thursday 08 January 2026 01:07:10 +0000 (0:00:00.480) 0:00:10.843 ****** 2026-01-08 01:08:32.624856 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-08 01:08:32.624860 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-08 01:08:32.624867 | 
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-08 01:08:32.624871 | orchestrator | 2026-01-08 01:08:32.624875 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-01-08 01:08:32.624879 | orchestrator | Thursday 08 January 2026 01:07:11 +0000 (0:00:01.436) 0:00:12.280 ****** 2026-01-08 01:08:32.624883 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-08 01:08:32.624887 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-08 01:08:32.624891 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-08 01:08:32.624895 | orchestrator | 2026-01-08 01:08:32.624899 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ****** 2026-01-08 01:08:32.624903 | orchestrator | Thursday 08 January 2026 01:07:13 +0000 (0:00:01.523) 0:00:13.804 ****** 2026-01-08 01:08:32.624907 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-08 01:08:32.624911 | orchestrator | 2026-01-08 01:08:32.624915 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] *************************** 2026-01-08 01:08:32.624919 | orchestrator | Thursday 08 January 2026 01:07:14 +0000 (0:00:00.991) 0:00:14.795 ****** 2026-01-08 01:08:32.624923 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:08:32.624927 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:08:32.624931 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:08:32.624935 | orchestrator | 2026-01-08 01:08:32.624938 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-01-08 01:08:32.624942 | orchestrator | Thursday 08 January 2026 01:07:15 +0000 (0:00:01.096) 0:00:15.892 ****** 
2026-01-08 01:08:32.624946 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:08:32.624950 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:08:32.624954 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:08:32.624958 | orchestrator | 2026-01-08 01:08:32.624962 | orchestrator | TASK [service-check-containers : grafana | Check containers] ******************* 2026-01-08 01:08:32.624966 | orchestrator | Thursday 08 January 2026 01:07:17 +0000 (0:00:01.567) 0:00:17.459 ****** 2026-01-08 01:08:32.624972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:08:32.624976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:08:32.624984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:08:32.624992 | orchestrator | 2026-01-08 01:08:32.624996 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] *** 2026-01-08 01:08:32.625045 | orchestrator | Thursday 08 January 2026 01:07:18 +0000 (0:00:01.058) 0:00:18.518 ****** 2026-01-08 01:08:32.625050 | orchestrator | changed: [testbed-node-0] => { 2026-01-08 01:08:32.625053 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:08:32.625057 | orchestrator | } 2026-01-08 01:08:32.625062 | orchestrator | changed: [testbed-node-1] => { 2026-01-08 01:08:32.625067 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:08:32.625071 | orchestrator | } 2026-01-08 01:08:32.625076 | orchestrator | changed: [testbed-node-2] => { 2026-01-08 01:08:32.625080 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:08:32.625085 | orchestrator | } 2026-01-08 01:08:32.625089 | orchestrator | 2026-01-08 01:08:32.625093 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-08 01:08:32.625098 | orchestrator | Thursday 08 January 2026 01:07:18 +0000 (0:00:00.532) 
0:00:19.051 ****** 2026-01-08 01:08:32.625103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:08:32.625107 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:08:32.625114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:08:32.625119 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:08:32.625124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2025.1', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:08:32.625131 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:08:32.625136 | orchestrator | 2026-01-08 01:08:32.625140 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-01-08 01:08:32.625145 | orchestrator | Thursday 08 January 2026 01:07:20 +0000 (0:00:01.408) 0:00:20.460 ****** 2026-01-08 01:08:32.625149 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:08:32.625153 | orchestrator | 2026-01-08 01:08:32.625157 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-01-08 01:08:32.625162 | orchestrator | Thursday 08 January 2026 01:07:22 +0000 (0:00:02.572) 0:00:23.032 ****** 2026-01-08 01:08:32.625166 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:08:32.625170 | orchestrator | 2026-01-08 01:08:32.625175 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-08 01:08:32.625179 | orchestrator | Thursday 08 January 2026 01:07:24 +0000 (0:00:02.225) 0:00:25.258 ****** 2026-01-08 01:08:32.625184 | orchestrator | 2026-01-08 01:08:32.625188 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-08 01:08:32.625193 | orchestrator | Thursday 08 January 2026 01:07:25 +0000 (0:00:00.073) 0:00:25.331 ****** 2026-01-08 01:08:32.625197 | orchestrator | 2026-01-08 01:08:32.625202 | orchestrator | TASK 
[grafana : Flush handlers] ************************************************
2026-01-08 01:08:32.625210 | orchestrator | Thursday 08 January 2026 01:07:25 +0000 (0:00:00.065) 0:00:25.397 ******
2026-01-08 01:08:32.625217 | orchestrator |
2026-01-08 01:08:32.625223 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-01-08 01:08:32.625230 | orchestrator | Thursday 08 January 2026 01:07:25 +0000 (0:00:00.075) 0:00:25.472 ******
2026-01-08 01:08:32.625237 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:08:32.625244 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:08:32.625251 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:08:32.625255 | orchestrator |
2026-01-08 01:08:32.625260 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-01-08 01:08:32.625265 | orchestrator | Thursday 08 January 2026 01:07:27 +0000 (0:00:02.161) 0:00:27.634 ******
2026-01-08 01:08:32.625269 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:08:32.625274 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:08:32.625278 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-01-08 01:08:32.625282 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-01-08 01:08:32.625287 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2026-01-08 01:08:32.625291 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:08:32.625296 | orchestrator |
2026-01-08 01:08:32.625302 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-01-08 01:08:32.625308 | orchestrator | Thursday 08 January 2026 01:08:05 +0000 (0:00:37.923) 0:01:05.557 ******
2026-01-08 01:08:32.625382 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:08:32.625390 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:08:32.625395 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:08:32.625399 | orchestrator |
2026-01-08 01:08:32.625404 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-01-08 01:08:32.625408 | orchestrator | Thursday 08 January 2026 01:08:27 +0000 (0:00:21.982) 0:01:27.540 ******
2026-01-08 01:08:32.625413 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:08:32.625418 | orchestrator |
2026-01-08 01:08:32.625422 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-01-08 01:08:32.625427 | orchestrator | Thursday 08 January 2026 01:08:29 +0000 (0:00:02.044) 0:01:29.584 ******
2026-01-08 01:08:32.625431 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:08:32.625436 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:08:32.625440 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:08:32.625445 | orchestrator |
2026-01-08 01:08:32.625449 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-01-08 01:08:32.625464 | orchestrator | Thursday 08 January 2026 01:08:29 +0000 (0:00:00.327) 0:01:29.911 ******
2026-01-08 01:08:32.625475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth':
False}}})
2026-01-08 01:08:32.625483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-01-08 01:08:32.625490 | orchestrator |
2026-01-08 01:08:32.625495 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-01-08 01:08:32.625499 | orchestrator | Thursday 08 January 2026 01:08:31 +0000 (0:00:02.333) 0:01:32.245 ******
2026-01-08 01:08:32.625503 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:08:32.625506 | orchestrator |
2026-01-08 01:08:32.625510 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 01:08:32.625515 | orchestrator | testbed-node-0 : ok=22  changed=13  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-08 01:08:32.625519 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-08 01:08:32.625523 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-08 01:08:32.625527 | orchestrator |
2026-01-08 01:08:32.625531 | orchestrator |
2026-01-08 01:08:32.625535 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 01:08:32.625539 | orchestrator | Thursday 08 January 2026 01:08:32 +0000 (0:00:00.270) 0:01:32.516 ******
2026-01-08 01:08:32.625542 | orchestrator | ===============================================================================
2026-01-08 01:08:32.625547 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 37.92s
2026-01-08 01:08:32.625553 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 21.98s
2026-01-08 01:08:32.625560 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.57s
2026-01-08 01:08:32.625567 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.33s
2026-01-08 01:08:32.625573 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.23s
2026-01-08 01:08:32.625580 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.16s
2026-01-08 01:08:32.625585 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.04s
2026-01-08 01:08:32.625593 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.63s
2026-01-08 01:08:32.625597 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 1.57s
2026-01-08 01:08:32.625601 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.52s
2026-01-08 01:08:32.625604 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.44s
2026-01-08 01:08:32.625608 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.43s
2026-01-08 01:08:32.625612 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.41s
2026-01-08 01:08:32.625616 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.37s
2026-01-08 01:08:32.625622 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 1.10s
2026-01-08 01:08:32.625628 | orchestrator | service-check-containers : grafana | Check containers ------------------- 1.06s
2026-01-08 01:08:32.625635 | orchestrator | grafana : Check if the folder for custom grafana dashboards exists ------ 0.99s
2026-01-08 01:08:32.625646 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.92s
2026-01-08 01:08:32.625653 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.90s
2026-01-08 01:08:32.625659 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.83s
2026-01-08 01:08:32.625666 | orchestrator | 2026-01-08 01:08:32 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED
2026-01-08 01:08:32.626930 | orchestrator | 2026-01-08 01:08:32 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED
2026-01-08 01:08:32.626962 | orchestrator | 2026-01-08 01:08:32 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:08:35.667490 | orchestrator | 2026-01-08 01:08:35 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED
2026-01-08 01:08:35.667558 | orchestrator | 2026-01-08 01:08:35 | INFO  | Task 8d3e2fef-ca8b-4470-9a03-5d353720b99f is in state STARTED
2026-01-08 01:08:35.668883 | orchestrator | 2026-01-08 01:08:35 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state STARTED
2026-01-08 01:08:35.669563 | orchestrator | 2026-01-08 01:08:35 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED
2026-01-08 01:08:35.669599 | orchestrator | 2026-01-08 01:08:35 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:08:38.706667 | orchestrator | 2026-01-08 01:08:38 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED
2026-01-08 01:08:38.707838 | orchestrator | 2026-01-08 01:08:38 | INFO  | Task 8d3e2fef-ca8b-4470-9a03-5d353720b99f is in state STARTED
2026-01-08 01:08:38.711914 | orchestrator | 2026-01-08 01:08:38 | INFO  | Task 3c7d5469-d176-4579-b15b-bb46bf57ce04 is in state SUCCESS
2026-01-08 01:08:38.711976 | orchestrator | 2026-01-08 01:08:38 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED
2026-01-08 01:08:38.711985 | orchestrator | 2026-01-08 01:08:38 | INFO  | Wait 1 second(s) until the next check
2026-01-08
01:08:38.714292 | orchestrator |
2026-01-08 01:08:38.714347 | orchestrator |
2026-01-08 01:08:38.714355 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-08 01:08:38.714361 | orchestrator |
2026-01-08 01:08:38.714367 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-08 01:08:38.714373 | orchestrator | Thursday 08 January 2026 01:05:53 +0000 (0:00:00.265) 0:00:00.265 ******
2026-01-08 01:08:38.714377 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:08:38.714383 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:08:38.714388 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:08:38.714393 | orchestrator |
2026-01-08 01:08:38.714399 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-08 01:08:38.714404 | orchestrator | Thursday 08 January 2026 01:05:53 +0000 (0:00:00.289) 0:00:00.555 ******
2026-01-08 01:08:38.714409 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-01-08 01:08:38.714415 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-01-08 01:08:38.714421 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-01-08 01:08:38.714425 | orchestrator |
2026-01-08 01:08:38.714428 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-01-08 01:08:38.714431 | orchestrator |
2026-01-08 01:08:38.714435 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-01-08 01:08:38.714441 | orchestrator | Thursday 08 January 2026 01:05:53 +0000 (0:00:00.461) 0:00:01.017 ******
2026-01-08 01:08:38.714457 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 01:08:38.714463 | orchestrator |
2026-01-08 01:08:38.714468 | orchestrator | TASK [service-ks-register : glance | Creating/deleting services] ***************
2026-01-08 01:08:38.714473 | orchestrator | Thursday 08 January 2026 01:05:54 +0000 (0:00:00.574) 0:00:01.591 ******
2026-01-08 01:08:38.714492 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-01-08 01:08:38.714498 | orchestrator |
2026-01-08 01:08:38.714503 | orchestrator | TASK [service-ks-register : glance | Creating/deleting endpoints] **************
2026-01-08 01:08:38.714508 | orchestrator | Thursday 08 January 2026 01:05:57 +0000 (0:00:03.111) 0:00:04.703 ******
2026-01-08 01:08:38.714513 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-01-08 01:08:38.714518 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-01-08 01:08:38.714523 | orchestrator |
2026-01-08 01:08:38.714527 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-01-08 01:08:38.714533 | orchestrator | Thursday 08 January 2026 01:06:04 +0000 (0:00:06.726) 0:00:11.430 ******
2026-01-08 01:08:38.714538 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-08 01:08:38.714608 | orchestrator |
2026-01-08 01:08:38.714613 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-01-08 01:08:38.714616 | orchestrator | Thursday 08 January 2026 01:06:07 +0000 (0:00:03.396) 0:00:14.826 ******
2026-01-08 01:08:38.714729 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-08 01:08:38.714741 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-01-08 01:08:38.714746 | orchestrator |
2026-01-08 01:08:38.714750 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-01-08 01:08:38.714755 | orchestrator | Thursday 08 January 2026 01:06:12 +0000 (0:00:04.397) 0:00:19.223 ******
2026-01-08 01:08:38.714760 | orchestrator | ok:
[testbed-node-0] => (item=admin)
2026-01-08 01:08:38.714764 | orchestrator |
2026-01-08 01:08:38.714769 | orchestrator | TASK [service-ks-register : glance | Granting/revoking user roles] *************
2026-01-08 01:08:38.714774 | orchestrator | Thursday 08 January 2026 01:06:15 +0000 (0:00:03.791) 0:00:23.015 ******
2026-01-08 01:08:38.714779 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-01-08 01:08:38.714783 | orchestrator |
2026-01-08 01:08:38.714789 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-01-08 01:08:38.714793 | orchestrator | Thursday 08 January 2026 01:06:20 +0000 (0:00:04.075) 0:00:27.091 ******
2026-01-08 01:08:38.714824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-08 01:08:38.714837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-08 01:08:38.714843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check
inter 2000 rise 2 fall 5', '']}}}})
2026-01-08 01:08:38.714847 | orchestrator |
2026-01-08 01:08:38.714850 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-01-08 01:08:38.714853 | orchestrator | Thursday 08 January 2026 01:06:23 +0000 (0:00:03.405) 0:00:30.496 ******
2026-01-08 01:08:38.714862 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 01:08:38.714865 | orchestrator |
2026-01-08 01:08:38.714869 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-01-08 01:08:38.714874 | orchestrator | Thursday 08 January 2026 01:06:24 +0000 (0:00:00.724) 0:00:31.221 ******
2026-01-08 01:08:38.714877 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:08:38.714881 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:08:38.714884 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:08:38.714887 | orchestrator |
2026-01-08 01:08:38.714890 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-01-08 01:08:38.714893 | orchestrator | Thursday 08 January 2026 01:06:28 +0000 (0:00:04.036) 0:00:35.258 ******
2026-01-08 01:08:38.714897 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-01-08 01:08:38.714901 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-01-08 01:08:38.714904 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-01-08 01:08:38.714907 | orchestrator |
2026-01-08 01:08:38.714910 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-01-08 01:08:38.714913 | orchestrator | Thursday 08 January 2026 01:06:29 +0000 (0:00:01.663) 0:00:36.921 ******
2026-01-08 01:08:38.714916 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-01-08 01:08:38.714919 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-01-08 01:08:38.714923 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-01-08 01:08:38.714926 | orchestrator |
2026-01-08 01:08:38.714929 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-01-08 01:08:38.714932 | orchestrator | Thursday 08 January 2026 01:06:31 +0000 (0:00:01.139) 0:00:38.061 ******
2026-01-08 01:08:38.714938 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:08:38.714943 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:08:38.714948 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:08:38.714953 | orchestrator |
2026-01-08 01:08:38.714957 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-01-08 01:08:38.714962 | orchestrator | Thursday 08 January 2026 01:06:31 +0000 (0:00:00.226) 0:00:38.734 ******
2026-01-08 01:08:38.714968 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:08:38.714972 | orchestrator |
2026-01-08 01:08:38.714978 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-01-08 01:08:38.714983 | orchestrator | Thursday 08 January 2026 01:06:31 +0000 (0:00:00.269) 0:00:38.961 ******
2026-01-08 01:08:38.714988 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:08:38.714993 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:08:38.714999 | orchestrator | skipping: [testbed-node-2]
2026-01-08
01:08:38.715017 | orchestrator |
2026-01-08 01:08:38.715021 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-01-08 01:08:38.715024 | orchestrator | Thursday 08 January 2026 01:06:32 +0000 (0:00:00.269) 0:00:39.231 ******
2026-01-08 01:08:38.715027 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 01:08:38.715031 | orchestrator |
2026-01-08 01:08:38.715034 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2026-01-08 01:08:38.715037 | orchestrator | Thursday 08 January 2026 01:06:32 +0000 (0:00:00.438) 0:00:39.669 ******
2026-01-08 01:08:38.715047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-08 01:08:38.715054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-08 01:08:38.715060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000
rise 2 fall 5', '']}}}})
2026-01-08 01:08:38.715066 | orchestrator |
2026-01-08 01:08:38.715069 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
2026-01-08 01:08:38.715072 | orchestrator | Thursday 08 January 2026 01:06:36 +0000 (0:00:03.836) 0:00:43.505 ******
2026-01-08 01:08:38.715078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-08 01:08:38.715082 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:08:38.715085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-08 01:08:38.715091 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:08:38.715099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-01-08 01:08:38.715103 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:08:38.715106 | orchestrator |
2026-01-08 01:08:38.715109 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS
key] ****** 2026-01-08 01:08:38.715112 | orchestrator | Thursday 08 January 2026 01:06:40 +0000 (0:00:03.743) 0:00:47.249 ****** 2026-01-08 01:08:38.715116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-08 01:08:38.715121 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:08:38.715130 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-08 01:08:38.715135 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:08:38.715140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': 
True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-08 01:08:38.715146 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:08:38.715157 | orchestrator | 2026-01-08 01:08:38.715163 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-01-08 01:08:38.715167 | orchestrator | Thursday 08 January 2026 01:06:44 +0000 (0:00:04.635) 0:00:51.884 ****** 2026-01-08 01:08:38.715170 | orchestrator | skipping: 
[testbed-node-2] 2026-01-08 01:08:38.715173 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:08:38.715176 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:08:38.715179 | orchestrator | 2026-01-08 01:08:38.715182 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-01-08 01:08:38.715186 | orchestrator | Thursday 08 January 2026 01:06:47 +0000 (0:00:02.764) 0:00:54.649 ****** 2026-01-08 01:08:38.715193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-08 01:08:38.715197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 
2026-01-08 01:08:38.715206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-08 01:08:38.715210 | orchestrator | 2026-01-08 01:08:38.715215 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-01-08 01:08:38.715218 | orchestrator | Thursday 08 January 
2026 01:06:50 +0000 (0:00:03.404) 0:00:58.053 ****** 2026-01-08 01:08:38.715221 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:08:38.715224 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:08:38.715227 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:08:38.715230 | orchestrator | 2026-01-08 01:08:38.715234 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-01-08 01:08:38.715237 | orchestrator | Thursday 08 January 2026 01:06:56 +0000 (0:00:05.012) 0:01:03.066 ****** 2026-01-08 01:08:38.715240 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:08:38.715243 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:08:38.715246 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:08:38.715249 | orchestrator | 2026-01-08 01:08:38.715253 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-01-08 01:08:38.715256 | orchestrator | Thursday 08 January 2026 01:07:00 +0000 (0:00:04.419) 0:01:07.485 ****** 2026-01-08 01:08:38.715259 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:08:38.715262 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:08:38.715265 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:08:38.715268 | orchestrator | 2026-01-08 01:08:38.715271 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-01-08 01:08:38.715274 | orchestrator | Thursday 08 January 2026 01:07:04 +0000 (0:00:03.756) 0:01:11.242 ****** 2026-01-08 01:08:38.715277 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:08:38.715281 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:08:38.715284 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:08:38.715287 | orchestrator | 2026-01-08 01:08:38.715290 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-01-08 01:08:38.715293 | orchestrator | Thursday 08 January 2026 
01:07:08 +0000 (0:00:04.556) 0:01:15.799 ****** 2026-01-08 01:08:38.715296 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:08:38.715300 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:08:38.715305 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:08:38.715308 | orchestrator | 2026-01-08 01:08:38.715311 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-01-08 01:08:38.715315 | orchestrator | Thursday 08 January 2026 01:07:09 +0000 (0:00:00.304) 0:01:16.103 ****** 2026-01-08 01:08:38.715318 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-08 01:08:38.715321 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:08:38.715324 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-08 01:08:38.715328 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:08:38.715332 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-08 01:08:38.715337 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:08:38.715342 | orchestrator | 2026-01-08 01:08:38.715346 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-01-08 01:08:38.715352 | orchestrator | Thursday 08 January 2026 01:07:12 +0000 (0:00:03.260) 0:01:19.364 ****** 2026-01-08 01:08:38.715358 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:08:38.715363 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:08:38.715368 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:08:38.715373 | orchestrator | 2026-01-08 01:08:38.715377 | orchestrator | TASK [service-check-containers : glance | Check containers] ******************** 2026-01-08 01:08:38.715380 | orchestrator | Thursday 08 January 2026 01:07:17 +0000 (0:00:04.901) 0:01:24.265 ****** 2026-01-08 01:08:38.715388 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-08 01:08:38.715393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-08 01:08:38.715401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-08 01:08:38.715405 | orchestrator | 2026-01-08 01:08:38.715409 | orchestrator | TASK [service-check-containers : glance | Notify handlers to restart containers] *** 2026-01-08 01:08:38.715413 | orchestrator | Thursday 08 January 2026 01:07:21 +0000 (0:00:04.714) 0:01:28.980 ****** 2026-01-08 01:08:38.715416 | orchestrator | changed: [testbed-node-0] => { 2026-01-08 01:08:38.715422 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:08:38.715426 | orchestrator | } 2026-01-08 01:08:38.715430 | orchestrator | changed: [testbed-node-1] => { 2026-01-08 01:08:38.715434 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:08:38.715438 | orchestrator | } 2026-01-08 
01:08:38.715441 | orchestrator | changed: [testbed-node-2] => { 2026-01-08 01:08:38.715445 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:08:38.715449 | orchestrator | } 2026-01-08 01:08:38.715453 | orchestrator | 2026-01-08 01:08:38.715456 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-08 01:08:38.715462 | orchestrator | Thursday 08 January 2026 01:07:22 +0000 (0:00:00.344) 0:01:29.325 ****** 2026-01-08 01:08:38.715466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-08 01:08:38.715472 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:08:38.715476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-08 01:08:38.715480 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:08:38.715488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2025.1', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-08 01:08:38.715495 | orchestrator | skipping: 
[testbed-node-2]
2026-01-08 01:08:38.715499 | orchestrator |
2026-01-08 01:08:38.715502 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-01-08 01:08:38.715506 | orchestrator | Thursday 08 January 2026 01:07:26 +0000 (0:00:03.747) 0:01:33.072 ******
2026-01-08 01:08:38.715510 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:08:38.715514 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:08:38.715517 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:08:38.715521 | orchestrator |
2026-01-08 01:08:38.715525 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2026-01-08 01:08:38.715528 | orchestrator | Thursday 08 January 2026 01:07:26 +0000 (0:00:00.526) 0:01:33.598 ******
2026-01-08 01:08:38.715532 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:08:38.715536 | orchestrator |
2026-01-08 01:08:38.715540 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2026-01-08 01:08:38.715544 | orchestrator | Thursday 08 January 2026 01:07:28 +0000 (0:00:02.066) 0:01:35.665 ******
2026-01-08 01:08:38.715547 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:08:38.715551 | orchestrator |
2026-01-08 01:08:38.715555 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2026-01-08 01:08:38.715558 | orchestrator | Thursday 08 January 2026 01:07:30 +0000 (0:00:02.053) 0:01:37.718 ******
2026-01-08 01:08:38.715563 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:08:38.715566 | orchestrator |
2026-01-08 01:08:38.715570 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2026-01-08 01:08:38.715575 | orchestrator | Thursday 08 January 2026 01:07:32 +0000 (0:00:01.969) 0:01:39.687 ******
2026-01-08 01:08:38.715580 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:08:38.715588 | orchestrator |
2026-01-08 01:08:38.715594 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2026-01-08 01:08:38.715599 | orchestrator | Thursday 08 January 2026 01:08:01 +0000 (0:00:29.160) 0:02:08.848 ******
2026-01-08 01:08:38.715604 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:08:38.715609 | orchestrator |
2026-01-08 01:08:38.715614 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-01-08 01:08:38.715619 | orchestrator | Thursday 08 January 2026 01:08:03 +0000 (0:00:01.981) 0:02:10.829 ******
2026-01-08 01:08:38.715624 | orchestrator |
2026-01-08 01:08:38.715630 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-01-08 01:08:38.715635 | orchestrator | Thursday 08 January 2026 01:08:03 +0000 (0:00:00.062) 0:02:10.891 ******
2026-01-08 01:08:38.715640 | orchestrator |
2026-01-08 01:08:38.715645 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-01-08 01:08:38.715650 | orchestrator | Thursday 08 January 2026 01:08:03 +0000 (0:00:00.066) 0:02:10.957 ******
2026-01-08 01:08:38.715655 | orchestrator |
2026-01-08 01:08:38.715661 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2026-01-08 01:08:38.715671 | orchestrator | Thursday 08 January 2026 01:08:03 +0000 (0:00:00.068) 0:02:11.026 ******
2026-01-08 01:08:38.715676 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:08:38.715683 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:08:38.715686 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:08:38.715690 | orchestrator |
2026-01-08 01:08:38.715694 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 01:08:38.715698 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-08 01:08:38.715703 | orchestrator | testbed-node-1 : ok=17  changed=11  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-01-08 01:08:38.715709 | orchestrator | testbed-node-2 : ok=17  changed=11  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-01-08 01:08:38.715713 | orchestrator |
2026-01-08 01:08:38.715717 | orchestrator |
2026-01-08 01:08:38.715720 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 01:08:38.715724 | orchestrator | Thursday 08 January 2026 01:08:34 +0000 (0:00:30.970) 0:02:41.996 ******
2026-01-08 01:08:38.715731 | orchestrator | ===============================================================================
2026-01-08 01:08:38.715735 | orchestrator | glance : Restart glance-api container ---------------------------------- 30.97s
2026-01-08 01:08:38.715739 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 29.16s
2026-01-08 01:08:38.715742 | orchestrator | service-ks-register : glance | Creating/deleting endpoints -------------- 6.73s
2026-01-08 01:08:38.715746 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.01s
2026-01-08 01:08:38.715750 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.90s
2026-01-08 01:08:38.715754 | orchestrator | service-check-containers : glance | Check containers -------------------- 4.72s
2026-01-08 01:08:38.715758 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.64s
2026-01-08 01:08:38.715762 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.56s
2026-01-08 01:08:38.715766 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.42s
2026-01-08 01:08:38.715769 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.40s
2026-01-08 01:08:38.715773 | orchestrator | service-ks-register : glance | Granting/revoking user roles ------------- 4.08s
2026-01-08 01:08:38.715778 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.04s
2026-01-08 01:08:38.715782 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.84s
2026-01-08 01:08:38.715786 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.79s
2026-01-08 01:08:38.715789 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.76s
2026-01-08 01:08:38.715792 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.75s
2026-01-08 01:08:38.715796 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.74s
2026-01-08 01:08:38.715801 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.41s
2026-01-08 01:08:38.715806 | orchestrator | glance : Copying over config.json files for services -------------------- 3.40s
2026-01-08 01:08:38.715811 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.40s
2026-01-08 01:08:41.755518 | orchestrator | 2026-01-08 01:08:41 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED
2026-01-08 01:08:41.756975 | orchestrator | 2026-01-08 01:08:41 | INFO  | Task 8d3e2fef-ca8b-4470-9a03-5d353720b99f is in state STARTED
2026-01-08 01:08:41.760297 | orchestrator | 2026-01-08 01:08:41 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED
2026-01-08 01:08:41.761504 | orchestrator | 2026-01-08 01:08:41 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:08:44.803958 | orchestrator | 2026-01-08 01:08:44 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED
2026-01-08 01:08:44.806301 | orchestrator | 2026-01-08 01:08:44 | INFO  | Task 8d3e2fef-ca8b-4470-9a03-5d353720b99f is in state STARTED
2026-01-08 01:08:44.809796 | orchestrator | 2026-01-08 01:08:44 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED
2026-01-08 01:08:44.809870 | orchestrator | 2026-01-08 01:08:44 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:08:47.855298 | orchestrator | 2026-01-08 01:08:47 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED
2026-01-08 01:08:47.858575 | orchestrator | 2026-01-08 01:08:47 | INFO  | Task 8d3e2fef-ca8b-4470-9a03-5d353720b99f is in state STARTED
2026-01-08 01:08:47.860466 | orchestrator | 2026-01-08 01:08:47 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED
2026-01-08 01:08:47.860500 | orchestrator | 2026-01-08 01:08:47 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:08:50.909228 | orchestrator | 2026-01-08 01:08:50 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED
2026-01-08 01:08:50.911055 | orchestrator | 2026-01-08 01:08:50 | INFO  | Task 8d3e2fef-ca8b-4470-9a03-5d353720b99f is in state STARTED
2026-01-08 01:08:50.913482 | orchestrator | 2026-01-08 01:08:50 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED
2026-01-08 01:08:50.913524 | orchestrator | 2026-01-08 01:08:50 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:08:53.960824 | orchestrator | 2026-01-08 01:08:53 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED
2026-01-08 01:08:53.961469 | orchestrator | 2026-01-08 01:08:53 | INFO  | Task 8d3e2fef-ca8b-4470-9a03-5d353720b99f is in state STARTED
2026-01-08 01:08:53.962486 | orchestrator | 2026-01-08 01:08:53 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED
2026-01-08 01:08:53.962511 | orchestrator | 2026-01-08 01:08:53 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:08:57.019410 | orchestrator | 2026-01-08 01:08:57 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED
2026-01-08 01:08:57.019470 | orchestrator | 2026-01-08 01:08:57 | INFO  | Task 8d3e2fef-ca8b-4470-9a03-5d353720b99f is in state STARTED
2026-01-08 01:08:57.019479 | orchestrator | 2026-01-08 01:08:57 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED
2026-01-08 01:08:57.019487 | orchestrator | 2026-01-08 01:08:57 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:09:00.074530 | orchestrator | 2026-01-08 01:09:00 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED
2026-01-08 01:09:00.076646 | orchestrator | 2026-01-08 01:09:00 | INFO  | Task 8d3e2fef-ca8b-4470-9a03-5d353720b99f is in state STARTED
2026-01-08 01:09:00.078787 | orchestrator | 2026-01-08 01:09:00 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED
2026-01-08 01:09:00.078827 | orchestrator | 2026-01-08 01:09:00 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:09:03.130367 | orchestrator | 2026-01-08 01:09:03 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED
2026-01-08 01:09:03.132720 | orchestrator | 2026-01-08 01:09:03 | INFO  | Task 8d3e2fef-ca8b-4470-9a03-5d353720b99f is in state STARTED
2026-01-08 01:09:03.134357 | orchestrator | 2026-01-08 01:09:03 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state STARTED
2026-01-08 01:09:03.134410 | orchestrator | 2026-01-08 01:09:03 | INFO  | Wait 1 second(s) until the next check
2026-01-08 01:09:06.184500 | orchestrator | 2026-01-08 01:09:06 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED
2026-01-08 01:09:06.185717 | orchestrator | 2026-01-08 01:09:06 | INFO  | Task 8d3e2fef-ca8b-4470-9a03-5d353720b99f is in state STARTED
2026-01-08 01:09:06.188760 | orchestrator | 2026-01-08 01:09:06 | INFO  | Task 2f9991fa-3c56-4e4d-9f18-bc3cda6ce9cc is in state SUCCESS
2026-01-08 01:09:06.190415 | orchestrator |
2026-01-08 01:09:06.190478 | orchestrator |
2026-01-08 01:09:06.190485 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-08 01:09:06.190489 | orchestrator |
2026-01-08 01:09:06.190492 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-08 01:09:06.190496 | orchestrator | Thursday 08 January 2026 01:06:13 +0000 (0:00:00.258) 0:00:00.258 ******
2026-01-08 01:09:06.190499 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:09:06.190503 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:09:06.190506 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:09:06.190510 | orchestrator |
2026-01-08 01:09:06.190513 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-08 01:09:06.190516 | orchestrator | Thursday 08 January 2026 01:06:13 +0000 (0:00:00.296) 0:00:00.555 ******
2026-01-08 01:09:06.190519 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-01-08 01:09:06.190523 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-01-08 01:09:06.190526 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-01-08 01:09:06.190529 | orchestrator |
2026-01-08 01:09:06.190532 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-01-08 01:09:06.190536 | orchestrator |
2026-01-08 01:09:06.190539 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-01-08 01:09:06.190542 | orchestrator | Thursday 08 January 2026 01:06:13 +0000 (0:00:00.442) 0:00:00.997 ******
2026-01-08 01:09:06.190545 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 01:09:06.190549 | orchestrator |
2026-01-08 01:09:06.190552 | orchestrator | TASK [service-ks-register : cinder | Creating/deleting services] ***************
2026-01-08 01:09:06.190556 | orchestrator | Thursday 08 January 2026 01:06:14 +0000 (0:00:00.564) 0:00:01.561 ******
2026-01-08 01:09:06.190559 | orchestrator | changed: [testbed-node-0] => (item=cinder (block-storage))
2026-01-08 01:09:06.190562 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-01-08 01:09:06.190566 | orchestrator |
2026-01-08 01:09:06.190569 | orchestrator | TASK [service-ks-register : cinder | Creating/deleting endpoints] **************
2026-01-08 01:09:06.190572 | orchestrator | Thursday 08 January 2026 01:06:21 +0000 (0:00:07.256) 0:00:08.818 ******
2026-01-08 01:09:06.190575 | orchestrator | changed: [testbed-node-0] => (item=cinder -> https://api-int.testbed.osism.xyz:8776/v3 -> internal)
2026-01-08 01:09:06.190578 | orchestrator | changed: [testbed-node-0] => (item=cinder -> https://api.testbed.osism.xyz:8776/v3 -> public)
2026-01-08 01:09:06.190582 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-01-08 01:09:06.190585 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-01-08 01:09:06.190588 | orchestrator |
2026-01-08 01:09:06.190592 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-01-08 01:09:06.190595 | orchestrator | Thursday 08 January 2026 01:06:34 +0000 (0:00:13.121) 0:00:21.939 ******
2026-01-08 01:09:06.190598 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-08 01:09:06.190609 | orchestrator |
2026-01-08 01:09:06.190612 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-01-08 01:09:06.190615 | orchestrator | Thursday 08 January 2026 01:06:38 +0000 (0:00:03.762) 0:00:25.702 ******
2026-01-08 01:09:06.190618 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-08 01:09:06.190622 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2026-01-08 01:09:06.190635 | orchestrator |
2026-01-08 01:09:06.190641 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2026-01-08 01:09:06.190647 | orchestrator | Thursday 08 January 2026 01:06:42 +0000 (0:00:04.172) 0:00:29.875 ******
2026-01-08 01:09:06.190653 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-08 01:09:06.190659 | orchestrator |
2026-01-08 01:09:06.190663 | orchestrator | TASK [service-ks-register : cinder | Granting/revoking user roles] *************
2026-01-08 01:09:06.190667 | orchestrator | Thursday 08 January 2026 01:06:46 +0000 (0:00:03.356) 0:00:33.232 ******
2026-01-08 01:09:06.190670 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2026-01-08 01:09:06.190673 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2026-01-08 01:09:06.190676 | orchestrator |
2026-01-08 01:09:06.190679 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2026-01-08 01:09:06.190684 | orchestrator | Thursday 08 January 2026 01:06:52 +0000 (0:00:06.420) 0:00:39.652 ******
2026-01-08 01:09:06.190701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:09:06.190710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:09:06.190719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:09:06.190733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-08 01:09:06.190740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-08 01:09:06.190744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-08 01:09:06.190754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-08 01:09:06.190760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-08 01:09:06.190764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-08 01:09:06.190776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-08 01:09:06.190782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-08 01:09:06.190787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-08 01:09:06.190792 | orchestrator |
2026-01-08 01:09:06.190800 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-01-08 01:09:06.190806 | orchestrator | Thursday 08 January 2026 01:06:54 +0000 (0:00:02.326) 0:00:41.979 ******
2026-01-08 01:09:06.190811 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:09:06.190817 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:09:06.190822 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:09:06.190827 | orchestrator |
2026-01-08 01:09:06.190832 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-01-08 01:09:06.190837 | orchestrator | Thursday 08 January 2026 01:06:55 +0000 (0:00:00.266) 0:00:42.245 ******
2026-01-08 01:09:06.190842 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 01:09:06.190845 | orchestrator |
2026-01-08 01:09:06.190848 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-01-08 01:09:06.190852 | orchestrator | Thursday 08 January 2026 01:06:55 +0000 (0:00:00.638) 0:00:42.884 ******
2026-01-08 01:09:06.190855 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-01-08 01:09:06.190858 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume)
2026-01-08 01:09:06.190861 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume)
2026-01-08 01:09:06.190864 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup)
2026-01-08 01:09:06.190868 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup)
2026-01-08 01:09:06.190907 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup)
2026-01-08 01:09:06.190910 | orchestrator |
2026-01-08 01:09:06.190914 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2026-01-08 01:09:06.190920 | orchestrator | Thursday 08 January 2026 01:06:58 +0000 (0:00:02.422) 0:00:45.306 ******
2026-01-08 01:09:06.190926 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-01-08 01:09:06.190930 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-01-08 01:09:06.190937 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-01-08 01:09:06.190941 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-01-08 01:09:06.190944 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-01-08 01:09:06.190952 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-01-08 01:09:06.190955 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-01-08 01:09:06.190960 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-01-08 01:09:06.190964 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-01-08 01:09:06.190971 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-01-08 01:09:06.190975 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-01-08 01:09:06.190980 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-01-08 01:09:06.190984 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-01-08 01:09:06.190994 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-01-08 01:09:06.191000 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-01-08 01:09:06.191003 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-01-08 01:09:06.191009 | orchestrator | ok: [testbed-node-1] => 
(item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-01-08 01:09:06.191012 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-01-08 01:09:06.191089 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-01-08 01:09:06.191111 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-01-08 01:09:06.191285 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-01-08 01:09:06.191300 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-01-08 01:09:06.191304 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-01-08 01:09:06.191312 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-01-08 01:09:06.191315 | orchestrator | 2026-01-08 01:09:06.191318 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-01-08 01:09:06.191322 | orchestrator | Thursday 08 January 2026 01:07:04 +0000 (0:00:06.429) 0:00:51.735 ****** 2026-01-08 01:09:06.191328 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-01-08 01:09:06.191332 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-01-08 01:09:06.191335 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-01-08 01:09:06.191338 | orchestrator | 2026-01-08 01:09:06.191341 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-01-08 01:09:06.191344 | orchestrator | Thursday 08 January 2026 01:07:07 +0000 (0:00:02.699) 0:00:54.434 ****** 2026-01-08 01:09:06.191348 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-01-08 01:09:06.191351 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-01-08 01:09:06.191354 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-01-08 01:09:06.191357 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-01-08 01:09:06.191360 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-01-08 01:09:06.191363 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-01-08 01:09:06.191366 | orchestrator | 2026-01-08 01:09:06.191370 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-01-08 01:09:06.191373 | orchestrator | Thursday 08 January 2026 01:07:10 +0000 (0:00:03.012) 0:00:57.447 ****** 2026-01-08 01:09:06.191376 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-01-08 01:09:06.191382 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-01-08 01:09:06.191385 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-01-08 01:09:06.191390 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-01-08 01:09:06.191393 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-01-08 01:09:06.191397 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-01-08 01:09:06.191400 | orchestrator | 2026-01-08 
01:09:06.191403 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-01-08 01:09:06.191406 | orchestrator | Thursday 08 January 2026 01:07:11 +0000 (0:00:01.301) 0:00:58.748 ****** 2026-01-08 01:09:06.191409 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:09:06.191413 | orchestrator | 2026-01-08 01:09:06.191416 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-01-08 01:09:06.191419 | orchestrator | Thursday 08 January 2026 01:07:11 +0000 (0:00:00.151) 0:00:58.899 ****** 2026-01-08 01:09:06.191422 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:09:06.191425 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:09:06.191428 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:09:06.191432 | orchestrator | 2026-01-08 01:09:06.191435 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-08 01:09:06.191438 | orchestrator | Thursday 08 January 2026 01:07:12 +0000 (0:00:00.378) 0:00:59.278 ****** 2026-01-08 01:09:06.191441 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 01:09:06.191444 | orchestrator | 2026-01-08 01:09:06.191447 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-01-08 01:09:06.191451 | orchestrator | Thursday 08 January 2026 01:07:13 +0000 (0:00:00.844) 0:01:00.122 ****** 2026-01-08 01:09:06.191454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:09:06.191460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:09:06.191466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:09:06.191474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.191480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 
2026-01-08 01:09:06.191486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.191494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.191499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.191507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.191515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.191520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.191525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.191530 | orchestrator | 2026-01-08 01:09:06.191535 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-01-08 01:09:06.191540 | orchestrator | Thursday 08 January 2026 01:07:17 +0000 (0:00:04.453) 0:01:04.576 ****** 2026-01-08 01:09:06.191549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 
'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:09:06.191558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.191568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.191573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.191577 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:09:06.191580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:09:06.191585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.191591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.191596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.191599 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:09:06.191603 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:09:06.191606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.191612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.191616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.191622 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:09:06.191625 | orchestrator | 2026-01-08 01:09:06.191628 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-01-08 01:09:06.191631 | orchestrator | Thursday 08 January 2026 01:07:18 +0000 (0:00:00.948) 0:01:05.524 ****** 2026-01-08 01:09:06.191637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:09:06.191640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.191692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-08 
01:09:06.191704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.191713 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:09:06.191943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:09:06.191949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.191957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.191961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.191964 | 
orchestrator | skipping: [testbed-node-2] 2026-01-08 01:09:06.191970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:09:06.191978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.191982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.191988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.191991 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:09:06.191995 | orchestrator | 2026-01-08 01:09:06.191998 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-01-08 01:09:06.192001 | orchestrator | Thursday 08 January 2026 01:07:20 +0000 (0:00:01.798) 0:01:07.323 ****** 2026-01-08 01:09:06.192005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:09:06.192010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:09:06.192034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:09:06.192041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192095 | orchestrator | 2026-01-08 01:09:06.192101 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-01-08 01:09:06.192111 | orchestrator | Thursday 08 January 2026 01:07:24 +0000 (0:00:04.679) 0:01:12.003 ****** 2026-01-08 01:09:06.192114 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-01-08 01:09:06.192119 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:09:06.192122 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-01-08 01:09:06.192125 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:09:06.192129 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-01-08 
01:09:06.192132 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:09:06.192135 | orchestrator | 2026-01-08 01:09:06.192138 | orchestrator | TASK [Configure uWSGI for Cinder] ********************************************** 2026-01-08 01:09:06.192141 | orchestrator | Thursday 08 January 2026 01:07:25 +0000 (0:00:01.000) 0:01:13.004 ****** 2026-01-08 01:09:06.192146 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 01:09:06.192150 | orchestrator | 2026-01-08 01:09:06.192153 | orchestrator | TASK [service-uwsgi-config : Copying over cinder-api uWSGI config] ************* 2026-01-08 01:09:06.192156 | orchestrator | Thursday 08 January 2026 01:07:27 +0000 (0:00:01.381) 0:01:14.385 ****** 2026-01-08 01:09:06.192159 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:09:06.192162 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:09:06.192165 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:09:06.192169 | orchestrator | 2026-01-08 01:09:06.192172 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-01-08 01:09:06.192175 | orchestrator | Thursday 08 January 2026 01:07:29 +0000 (0:00:01.727) 0:01:16.113 ****** 2026-01-08 01:09:06.192178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:09:06.192186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:09:06.192192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 
'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:09:06.192203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192268 | orchestrator | 2026-01-08 01:09:06.192272 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-01-08 01:09:06.192275 | orchestrator | Thursday 08 January 2026 01:07:39 +0000 (0:00:10.563) 0:01:26.677 ****** 2026-01-08 01:09:06.192278 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:09:06.192281 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:09:06.192285 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:09:06.192288 | orchestrator | 2026-01-08 01:09:06.192291 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-01-08 01:09:06.192294 | orchestrator | Thursday 08 January 2026 01:07:41 +0000 (0:00:01.746) 0:01:28.424 ****** 2026-01-08 01:09:06.192300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:09:06.192306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.192312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.192318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.192323 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:09:06.192328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:09:06.192336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.192346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.192351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.192356 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:09:06.192363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:09:06.192369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.192379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.192388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.192394 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:09:06.192399 | orchestrator | 2026-01-08 01:09:06.192405 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-01-08 01:09:06.192410 | orchestrator | Thursday 08 January 2026 01:07:42 +0000 (0:00:00.675) 0:01:29.100 ****** 2026-01-08 01:09:06.192415 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:09:06.192420 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:09:06.192425 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:09:06.192428 | orchestrator | 2026-01-08 01:09:06.192432 | orchestrator | TASK [service-check-containers : cinder 
| Check containers] ******************** 2026-01-08 01:09:06.192437 | orchestrator | Thursday 08 January 2026 01:07:42 +0000 (0:00:00.336) 0:01:29.436 ****** 2026-01-08 01:09:06.192445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:09:06.192451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:09:06.192460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:09:06.192471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192476 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': 
'30'}}}) 2026-01-08 01:09:06.192491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-08 01:09:06.192515 | orchestrator | 2026-01-08 01:09:06.192518 | orchestrator | TASK [service-check-containers : cinder | Notify handlers to restart containers] *** 2026-01-08 01:09:06.192522 | orchestrator | Thursday 08 January 2026 01:07:45 +0000 (0:00:03.104) 0:01:32.540 ****** 2026-01-08 
01:09:06.192525 | orchestrator | changed: [testbed-node-0] => { 2026-01-08 01:09:06.192528 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:09:06.192532 | orchestrator | } 2026-01-08 01:09:06.192535 | orchestrator | changed: [testbed-node-1] => { 2026-01-08 01:09:06.192538 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:09:06.192541 | orchestrator | } 2026-01-08 01:09:06.192544 | orchestrator | changed: [testbed-node-2] => { 2026-01-08 01:09:06.192548 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:09:06.192551 | orchestrator | } 2026-01-08 01:09:06.192555 | orchestrator | 2026-01-08 01:09:06.192558 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-08 01:09:06.192564 | orchestrator | Thursday 08 January 2026 01:07:46 +0000 (0:00:00.552) 0:01:33.093 ****** 2026-01-08 01:09:06.192569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:09:06.192575 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.192579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.192583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.192587 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:09:06.192593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:09:06.192599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.192606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.192610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.192614 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:09:06.192618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2025.1', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': 
'30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:09:06.192624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2025.1', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.192631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2025.1', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.192641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2025.1', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-08 01:09:06.192645 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:09:06.192648 | orchestrator | 2026-01-08 01:09:06.192654 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-08 01:09:06.192658 | orchestrator | Thursday 08 January 2026 01:07:46 +0000 (0:00:00.890) 0:01:33.983 ****** 2026-01-08 01:09:06.192662 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:09:06.192666 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:09:06.192669 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:09:06.192673 | orchestrator | 2026-01-08 01:09:06.192677 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-01-08 01:09:06.192681 | orchestrator | Thursday 08 January 2026 01:07:47 +0000 (0:00:00.323) 0:01:34.307 ****** 2026-01-08 01:09:06.192684 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:09:06.192688 | orchestrator | 2026-01-08 01:09:06.192692 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-01-08 01:09:06.192695 | orchestrator | Thursday 08 January 2026 01:07:49 +0000 (0:00:02.041) 0:01:36.349 ****** 2026-01-08 01:09:06.192699 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:09:06.192703 | orchestrator | 2026-01-08 01:09:06.192707 | orchestrator | TASK [cinder : Running Cinder bootstrap 
container] ***************************** 2026-01-08 01:09:06.192711 | orchestrator | Thursday 08 January 2026 01:07:51 +0000 (0:00:02.483) 0:01:38.833 ****** 2026-01-08 01:09:06.192715 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:09:06.192718 | orchestrator | 2026-01-08 01:09:06.192722 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-01-08 01:09:06.192726 | orchestrator | Thursday 08 January 2026 01:08:08 +0000 (0:00:16.302) 0:01:55.135 ****** 2026-01-08 01:09:06.192729 | orchestrator | 2026-01-08 01:09:06.192733 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-01-08 01:09:06.192737 | orchestrator | Thursday 08 January 2026 01:08:08 +0000 (0:00:00.107) 0:01:55.243 ****** 2026-01-08 01:09:06.192741 | orchestrator | 2026-01-08 01:09:06.192745 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-01-08 01:09:06.192749 | orchestrator | Thursday 08 January 2026 01:08:08 +0000 (0:00:00.128) 0:01:55.371 ****** 2026-01-08 01:09:06.192753 | orchestrator | 2026-01-08 01:09:06.192756 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-01-08 01:09:06.192760 | orchestrator | Thursday 08 January 2026 01:08:08 +0000 (0:00:00.117) 0:01:55.489 ****** 2026-01-08 01:09:06.192764 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:09:06.192768 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:09:06.192774 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:09:06.192778 | orchestrator | 2026-01-08 01:09:06.192782 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-01-08 01:09:06.192786 | orchestrator | Thursday 08 January 2026 01:08:28 +0000 (0:00:20.149) 0:02:15.639 ****** 2026-01-08 01:09:06.192790 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:09:06.192794 | orchestrator | changed: 
[testbed-node-2] 2026-01-08 01:09:06.192797 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:09:06.192801 | orchestrator | 2026-01-08 01:09:06.192805 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-01-08 01:09:06.192809 | orchestrator | Thursday 08 January 2026 01:08:34 +0000 (0:00:05.982) 0:02:21.621 ****** 2026-01-08 01:09:06.192813 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:09:06.192817 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:09:06.192820 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:09:06.192826 | orchestrator | 2026-01-08 01:09:06.192834 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-01-08 01:09:06.192839 | orchestrator | Thursday 08 January 2026 01:08:53 +0000 (0:00:18.981) 0:02:40.602 ****** 2026-01-08 01:09:06.192844 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:09:06.192848 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:09:06.192857 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:09:06.192863 | orchestrator | 2026-01-08 01:09:06.192868 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-01-08 01:09:06.192874 | orchestrator | Thursday 08 January 2026 01:09:04 +0000 (0:00:11.054) 0:02:51.657 ****** 2026-01-08 01:09:06.192879 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:09:06.192883 | orchestrator | 2026-01-08 01:09:06.192888 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 01:09:06.192894 | orchestrator | testbed-node-0 : ok=32  changed=23  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-08 01:09:06.192899 | orchestrator | testbed-node-1 : ok=23  changed=16  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-08 01:09:06.192905 | orchestrator | testbed-node-2 : ok=23  changed=16  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-01-08 01:09:06.192910 | orchestrator | 2026-01-08 01:09:06.192915 | orchestrator | 2026-01-08 01:09:06.192921 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 01:09:06.192926 | orchestrator | Thursday 08 January 2026 01:09:04 +0000 (0:00:00.257) 0:02:51.914 ****** 2026-01-08 01:09:06.192932 | orchestrator | =============================================================================== 2026-01-08 01:09:06.192937 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 20.15s 2026-01-08 01:09:06.192943 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 18.98s 2026-01-08 01:09:06.192949 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 16.30s 2026-01-08 01:09:06.192955 | orchestrator | service-ks-register : cinder | Creating/deleting endpoints ------------- 13.12s 2026-01-08 01:09:06.192958 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.05s 2026-01-08 01:09:06.192962 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.56s 2026-01-08 01:09:06.192965 | orchestrator | service-ks-register : cinder | Creating/deleting services --------------- 7.26s 2026-01-08 01:09:06.192968 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 6.43s 2026-01-08 01:09:06.192975 | orchestrator | service-ks-register : cinder | Granting/revoking user roles ------------- 6.42s 2026-01-08 01:09:06.192980 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.98s 2026-01-08 01:09:06.192987 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.68s 2026-01-08 01:09:06.192994 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.45s 2026-01-08 01:09:06.193002 | orchestrator | 
service-ks-register : cinder | Creating users --------------------------- 4.17s 2026-01-08 01:09:06.193008 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.76s 2026-01-08 01:09:06.193013 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.36s 2026-01-08 01:09:06.193032 | orchestrator | service-check-containers : cinder | Check containers -------------------- 3.10s 2026-01-08 01:09:06.193037 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.01s 2026-01-08 01:09:06.193042 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.70s 2026-01-08 01:09:06.193046 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.48s 2026-01-08 01:09:06.193051 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.42s 2026-01-08 01:09:06.193056 | orchestrator | 2026-01-08 01:09:06 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:09:09.241563 | orchestrator | 2026-01-08 01:09:09 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:09:09.243073 | orchestrator | 2026-01-08 01:09:09 | INFO  | Task 8d3e2fef-ca8b-4470-9a03-5d353720b99f is in state STARTED 2026-01-08 01:09:09.243120 | orchestrator | 2026-01-08 01:09:09 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:09:12.291219 | orchestrator | 2026-01-08 01:09:12 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:09:12.292961 | orchestrator | 2026-01-08 01:09:12 | INFO  | Task 8d3e2fef-ca8b-4470-9a03-5d353720b99f is in state STARTED 2026-01-08 01:09:12.293002 | orchestrator | 2026-01-08 01:09:12 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:09:15.341792 | orchestrator | 2026-01-08 01:09:15 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:09:15.343751 | 
orchestrator | 2026-01-08 01:09:15 | INFO  | Task 8d3e2fef-ca8b-4470-9a03-5d353720b99f is in state STARTED 2026-01-08 01:09:15.343802 | orchestrator | 2026-01-08 01:09:15 | INFO  | Wait 1 second(s) until the next check [...] 2026-01-08 01:10:34.596414 | orchestrator | 2026-01-08 01:10:34 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:10:34.598263 | orchestrator | 2026-01-08 01:10:34 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:10:34.599530 | orchestrator | 2026-01-08 01:10:34 | INFO  | Task 8d3e2fef-ca8b-4470-9a03-5d353720b99f is in state SUCCESS 2026-01-08 01:10:34.599980 | orchestrator | 2026-01-08 01:10:34 | INFO  | Wait 1 second(s) until the next check [...] 2026-01-08 01:12:54.807075 | orchestrator | 2026-01-08 01:12:54 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:12:54.807820 | orchestrator | 2026-01-08 01:12:54 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 
01:12:54.807863 | orchestrator | 2026-01-08 01:12:54 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:12:57.842898 | orchestrator | 2026-01-08 01:12:57 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:12:57.845314 | orchestrator | 2026-01-08 01:12:57 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:12:57.845369 | orchestrator | 2026-01-08 01:12:57 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:13:00.889436 | orchestrator | 2026-01-08 01:13:00 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:13:00.891825 | orchestrator | 2026-01-08 01:13:00 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:13:00.891903 | orchestrator | 2026-01-08 01:13:00 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:13:03.938464 | orchestrator | 2026-01-08 01:13:03 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:13:03.940492 | orchestrator | 2026-01-08 01:13:03 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:13:03.940539 | orchestrator | 2026-01-08 01:13:03 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:13:06.990187 | orchestrator | 2026-01-08 01:13:06 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:13:06.991475 | orchestrator | 2026-01-08 01:13:06 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:13:06.991497 | orchestrator | 2026-01-08 01:13:06 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:13:10.050676 | orchestrator | 2026-01-08 01:13:10 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:13:10.054294 | orchestrator | 2026-01-08 01:13:10 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:13:10.054336 | orchestrator | 2026-01-08 01:13:10 | INFO  | Wait 1 second(s) 
until the next check 2026-01-08 01:13:13.107320 | orchestrator | 2026-01-08 01:13:13 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:13:13.108435 | orchestrator | 2026-01-08 01:13:13 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:13:13.108480 | orchestrator | 2026-01-08 01:13:13 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:13:16.145924 | orchestrator | 2026-01-08 01:13:16 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:13:16.146597 | orchestrator | 2026-01-08 01:13:16 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:13:16.146628 | orchestrator | 2026-01-08 01:13:16 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:13:19.184290 | orchestrator | 2026-01-08 01:13:19 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:13:19.185160 | orchestrator | 2026-01-08 01:13:19 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:13:19.185204 | orchestrator | 2026-01-08 01:13:19 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:13:22.227518 | orchestrator | 2026-01-08 01:13:22 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:13:22.229074 | orchestrator | 2026-01-08 01:13:22 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:13:22.229142 | orchestrator | 2026-01-08 01:13:22 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:13:25.268192 | orchestrator | 2026-01-08 01:13:25 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:13:25.268584 | orchestrator | 2026-01-08 01:13:25 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:13:25.268608 | orchestrator | 2026-01-08 01:13:25 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:13:28.324730 | orchestrator | 2026-01-08 
01:13:28 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:13:28.325290 | orchestrator | 2026-01-08 01:13:28 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:13:28.325323 | orchestrator | 2026-01-08 01:13:28 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:13:31.357993 | orchestrator | 2026-01-08 01:13:31 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:13:31.360168 | orchestrator | 2026-01-08 01:13:31 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:13:31.360218 | orchestrator | 2026-01-08 01:13:31 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:13:34.403873 | orchestrator | 2026-01-08 01:13:34 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:13:34.406461 | orchestrator | 2026-01-08 01:13:34 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:13:34.406515 | orchestrator | 2026-01-08 01:13:34 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:13:37.468945 | orchestrator | 2026-01-08 01:13:37 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:13:37.469453 | orchestrator | 2026-01-08 01:13:37 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:13:37.469471 | orchestrator | 2026-01-08 01:13:37 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:13:40.509120 | orchestrator | 2026-01-08 01:13:40 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:13:40.509446 | orchestrator | 2026-01-08 01:13:40 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:13:40.509470 | orchestrator | 2026-01-08 01:13:40 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:13:43.539584 | orchestrator | 2026-01-08 01:13:43 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state 
STARTED 2026-01-08 01:13:43.540856 | orchestrator | 2026-01-08 01:13:43 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:13:43.540944 | orchestrator | 2026-01-08 01:13:43 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:13:46.578666 | orchestrator | 2026-01-08 01:13:46 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:13:46.579305 | orchestrator | 2026-01-08 01:13:46 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:13:46.579328 | orchestrator | 2026-01-08 01:13:46 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:13:49.621809 | orchestrator | 2026-01-08 01:13:49 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:13:49.626738 | orchestrator | 2026-01-08 01:13:49 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:13:49.626792 | orchestrator | 2026-01-08 01:13:49 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:13:52.674801 | orchestrator | 2026-01-08 01:13:52 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:13:52.677512 | orchestrator | 2026-01-08 01:13:52 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:13:52.677561 | orchestrator | 2026-01-08 01:13:52 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:13:55.722887 | orchestrator | 2026-01-08 01:13:55 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:13:55.722985 | orchestrator | 2026-01-08 01:13:55 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:13:55.723044 | orchestrator | 2026-01-08 01:13:55 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:13:58.765469 | orchestrator | 2026-01-08 01:13:58 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:13:58.765795 | orchestrator | 2026-01-08 01:13:58 | INFO  
| Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:13:58.765815 | orchestrator | 2026-01-08 01:13:58 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:14:01.815354 | orchestrator | 2026-01-08 01:14:01 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:14:01.820747 | orchestrator | 2026-01-08 01:14:01 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:14:01.820805 | orchestrator | 2026-01-08 01:14:01 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:14:04.871069 | orchestrator | 2026-01-08 01:14:04 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:14:04.871123 | orchestrator | 2026-01-08 01:14:04 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:14:04.871128 | orchestrator | 2026-01-08 01:14:04 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:14:07.922944 | orchestrator | 2026-01-08 01:14:07 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:14:07.924327 | orchestrator | 2026-01-08 01:14:07 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:14:07.924472 | orchestrator | 2026-01-08 01:14:07 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:14:10.967728 | orchestrator | 2026-01-08 01:14:10 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:14:10.969696 | orchestrator | 2026-01-08 01:14:10 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:14:10.969762 | orchestrator | 2026-01-08 01:14:10 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:14:14.017868 | orchestrator | 2026-01-08 01:14:14 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:14:14.021098 | orchestrator | 2026-01-08 01:14:14 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 
01:14:14.021144 | orchestrator | 2026-01-08 01:14:14 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:14:17.076618 | orchestrator | 2026-01-08 01:14:17 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:14:17.076702 | orchestrator | 2026-01-08 01:14:17 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:14:17.076713 | orchestrator | 2026-01-08 01:14:17 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:14:20.107520 | orchestrator | 2026-01-08 01:14:20 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:14:20.110514 | orchestrator | 2026-01-08 01:14:20 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:14:20.111174 | orchestrator | 2026-01-08 01:14:20 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:14:23.151921 | orchestrator | 2026-01-08 01:14:23 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:14:23.153135 | orchestrator | 2026-01-08 01:14:23 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:14:23.153290 | orchestrator | 2026-01-08 01:14:23 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:14:26.196674 | orchestrator | 2026-01-08 01:14:26 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:14:26.198584 | orchestrator | 2026-01-08 01:14:26 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:14:26.198792 | orchestrator | 2026-01-08 01:14:26 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:14:29.239770 | orchestrator | 2026-01-08 01:14:29 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:14:29.242183 | orchestrator | 2026-01-08 01:14:29 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:14:29.242669 | orchestrator | 2026-01-08 01:14:29 | INFO  | Wait 1 second(s) 
until the next check 2026-01-08 01:14:32.288206 | orchestrator | 2026-01-08 01:14:32 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:14:32.290440 | orchestrator | 2026-01-08 01:14:32 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:14:32.290514 | orchestrator | 2026-01-08 01:14:32 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:14:35.330339 | orchestrator | 2026-01-08 01:14:35 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:14:35.331561 | orchestrator | 2026-01-08 01:14:35 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:14:35.331614 | orchestrator | 2026-01-08 01:14:35 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:14:38.367806 | orchestrator | 2026-01-08 01:14:38 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:14:38.369446 | orchestrator | 2026-01-08 01:14:38 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:14:38.369490 | orchestrator | 2026-01-08 01:14:38 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:14:41.411504 | orchestrator | 2026-01-08 01:14:41 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:14:41.412061 | orchestrator | 2026-01-08 01:14:41 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:14:41.412095 | orchestrator | 2026-01-08 01:14:41 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:14:44.468553 | orchestrator | 2026-01-08 01:14:44 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:14:44.470565 | orchestrator | 2026-01-08 01:14:44 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:14:44.470626 | orchestrator | 2026-01-08 01:14:44 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:14:47.518327 | orchestrator | 2026-01-08 
01:14:47 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:14:47.519721 | orchestrator | 2026-01-08 01:14:47 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:14:47.519765 | orchestrator | 2026-01-08 01:14:47 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:14:50.569675 | orchestrator | 2026-01-08 01:14:50 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:14:50.571392 | orchestrator | 2026-01-08 01:14:50 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:14:50.571467 | orchestrator | 2026-01-08 01:14:50 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:14:53.625630 | orchestrator | 2026-01-08 01:14:53 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:14:53.628181 | orchestrator | 2026-01-08 01:14:53 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:14:53.628224 | orchestrator | 2026-01-08 01:14:53 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:14:56.672288 | orchestrator | 2026-01-08 01:14:56 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:14:56.672804 | orchestrator | 2026-01-08 01:14:56 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:14:56.672905 | orchestrator | 2026-01-08 01:14:56 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:14:59.704137 | orchestrator | 2026-01-08 01:14:59 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:14:59.704809 | orchestrator | 2026-01-08 01:14:59 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:14:59.704833 | orchestrator | 2026-01-08 01:14:59 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:15:02.729241 | orchestrator | 2026-01-08 01:15:02 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state 
STARTED 2026-01-08 01:15:02.729517 | orchestrator | 2026-01-08 01:15:02 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:15:02.729533 | orchestrator | 2026-01-08 01:15:02 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:15:05.756452 | orchestrator | 2026-01-08 01:15:05 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:15:05.758580 | orchestrator | 2026-01-08 01:15:05 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:15:05.758660 | orchestrator | 2026-01-08 01:15:05 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:15:08.780087 | orchestrator | 2026-01-08 01:15:08 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:15:08.780995 | orchestrator | 2026-01-08 01:15:08 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:15:08.781034 | orchestrator | 2026-01-08 01:15:08 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:15:11.806620 | orchestrator | 2026-01-08 01:15:11 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:15:11.808386 | orchestrator | 2026-01-08 01:15:11 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:15:11.808564 | orchestrator | 2026-01-08 01:15:11 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:15:14.857618 | orchestrator | 2026-01-08 01:15:14 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:15:14.858632 | orchestrator | 2026-01-08 01:15:14 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:15:14.858729 | orchestrator | 2026-01-08 01:15:14 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:15:17.912811 | orchestrator | 2026-01-08 01:15:17 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:15:17.914584 | orchestrator | 2026-01-08 01:15:17 | INFO  
| Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:15:17.914619 | orchestrator | 2026-01-08 01:15:17 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:15:20.965042 | orchestrator | 2026-01-08 01:15:20 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:15:20.968710 | orchestrator | 2026-01-08 01:15:20 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:15:20.968784 | orchestrator | 2026-01-08 01:15:20 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:15:24.016503 | orchestrator | 2026-01-08 01:15:24 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:15:24.016561 | orchestrator | 2026-01-08 01:15:24 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:15:24.016606 | orchestrator | 2026-01-08 01:15:24 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:15:27.055981 | orchestrator | 2026-01-08 01:15:27 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:15:27.058713 | orchestrator | 2026-01-08 01:15:27 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:15:27.058757 | orchestrator | 2026-01-08 01:15:27 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:15:30.105651 | orchestrator | 2026-01-08 01:15:30 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:15:30.107534 | orchestrator | 2026-01-08 01:15:30 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 01:15:30.107587 | orchestrator | 2026-01-08 01:15:30 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:15:33.156365 | orchestrator | 2026-01-08 01:15:33 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:15:33.158453 | orchestrator | 2026-01-08 01:15:33 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state STARTED 2026-01-08 
01:15:33.158510 | orchestrator | 2026-01-08 01:15:33 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:15:36.199934 | orchestrator | 2026-01-08 01:15:36 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:15:36.205417 | orchestrator | 2026-01-08 01:15:36 | INFO  | Task da0d7197-e45d-461d-9bd0-ec9f1729441f is in state SUCCESS 2026-01-08 01:15:36.207561 | orchestrator | 2026-01-08 01:15:36.207604 | orchestrator | 2026-01-08 01:15:36.207616 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-08 01:15:36.207624 | orchestrator | 2026-01-08 01:15:36.207630 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-08 01:15:36.207636 | orchestrator | Thursday 08 January 2026 01:08:37 +0000 (0:00:00.242) 0:00:00.242 ****** 2026-01-08 01:15:36.207642 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:15:36.207648 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:15:36.207654 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:15:36.207660 | orchestrator | 2026-01-08 01:15:36.207667 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-08 01:15:36.207674 | orchestrator | Thursday 08 January 2026 01:08:37 +0000 (0:00:00.384) 0:00:00.626 ****** 2026-01-08 01:15:36.207681 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2026-01-08 01:15:36.207688 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2026-01-08 01:15:36.207694 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2026-01-08 01:15:36.207698 | orchestrator | 2026-01-08 01:15:36.207702 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2026-01-08 01:15:36.207706 | orchestrator | 2026-01-08 01:15:36.207710 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2026-01-08 01:15:36.207714 | 
orchestrator | Thursday 08 January 2026 01:08:38 +0000 (0:00:00.519) 0:00:01.146 ****** 2026-01-08 01:15:36.207717 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:15:36.207721 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:15:36.207725 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:15:36.207729 | orchestrator | 2026-01-08 01:15:36.207733 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 01:15:36.207737 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-08 01:15:36.207742 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-08 01:15:36.207759 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-08 01:15:36.207763 | orchestrator | 2026-01-08 01:15:36.207766 | orchestrator | 2026-01-08 01:15:36.207770 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 01:15:36.207774 | orchestrator | Thursday 08 January 2026 01:10:32 +0000 (0:01:54.837) 0:01:55.983 ****** 2026-01-08 01:15:36.207778 | orchestrator | =============================================================================== 2026-01-08 01:15:36.207782 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 114.84s 2026-01-08 01:15:36.207785 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.52s 2026-01-08 01:15:36.207789 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s 2026-01-08 01:15:36.207793 | orchestrator | 2026-01-08 01:15:36.207797 | orchestrator | 2026-01-08 01:15:36.207801 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-08 01:15:36.207804 | orchestrator | 2026-01-08 01:15:36.207824 | orchestrator | TASK [Group hosts based on 
Kolla action] *************************************** 2026-01-08 01:15:36.207829 | orchestrator | Thursday 08 January 2026 01:10:37 +0000 (0:00:00.272) 0:00:00.272 ****** 2026-01-08 01:15:36.207833 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:15:36.207836 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:15:36.207841 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:15:36.207844 | orchestrator | 2026-01-08 01:15:36.207848 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-08 01:15:36.207852 | orchestrator | Thursday 08 January 2026 01:10:38 +0000 (0:00:00.444) 0:00:00.716 ****** 2026-01-08 01:15:36.207856 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-01-08 01:15:36.207860 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-01-08 01:15:36.207864 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-01-08 01:15:36.207868 | orchestrator | 2026-01-08 01:15:36.207871 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-01-08 01:15:36.207875 | orchestrator | 2026-01-08 01:15:36.207879 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-08 01:15:36.207883 | orchestrator | Thursday 08 January 2026 01:10:38 +0000 (0:00:00.431) 0:00:01.147 ****** 2026-01-08 01:15:36.207887 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 01:15:36.207891 | orchestrator | 2026-01-08 01:15:36.207894 | orchestrator | TASK [service-ks-register : octavia | Creating/deleting services] ************** 2026-01-08 01:15:36.207905 | orchestrator | Thursday 08 January 2026 01:10:39 +0000 (0:00:00.563) 0:00:01.710 ****** 2026-01-08 01:15:36.207916 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-01-08 01:15:36.207924 | orchestrator | 2026-01-08 01:15:36.207928 | 
orchestrator | TASK [service-ks-register : octavia | Creating/deleting endpoints] ************* 2026-01-08 01:15:36.207931 | orchestrator | Thursday 08 January 2026 01:10:42 +0000 (0:00:03.528) 0:00:05.239 ****** 2026-01-08 01:15:36.207987 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-01-08 01:15:36.207993 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-01-08 01:15:36.207997 | orchestrator | 2026-01-08 01:15:36.208001 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-01-08 01:15:36.208005 | orchestrator | Thursday 08 January 2026 01:10:49 +0000 (0:00:06.374) 0:00:11.614 ****** 2026-01-08 01:15:36.208009 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-08 01:15:36.208013 | orchestrator | 2026-01-08 01:15:36.208017 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-01-08 01:15:36.208020 | orchestrator | Thursday 08 January 2026 01:10:52 +0000 (0:00:03.264) 0:00:14.879 ****** 2026-01-08 01:15:36.208032 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-08 01:15:36.208041 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-01-08 01:15:36.208045 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-01-08 01:15:36.208049 | orchestrator | 2026-01-08 01:15:36.208053 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-01-08 01:15:36.208057 | orchestrator | Thursday 08 January 2026 01:10:59 +0000 (0:00:07.362) 0:00:22.242 ****** 2026-01-08 01:15:36.208061 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-08 01:15:36.208065 | orchestrator | 2026-01-08 01:15:36.208068 | orchestrator | TASK [service-ks-register : octavia | Granting/revoking user roles] ************ 2026-01-08 
01:15:36.208072 | orchestrator | Thursday 08 January 2026 01:11:03 +0000 (0:00:03.274) 0:00:25.517 ******
2026-01-08 01:15:36.208076 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-01-08 01:15:36.208080 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-01-08 01:15:36.208084 | orchestrator |
2026-01-08 01:15:36.208087 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-01-08 01:15:36.208091 | orchestrator | Thursday 08 January 2026 01:11:10 +0000 (0:00:07.363) 0:00:32.880 ******
2026-01-08 01:15:36.208095 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-01-08 01:15:36.208099 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-01-08 01:15:36.208102 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-01-08 01:15:36.208106 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-01-08 01:15:36.208110 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-01-08 01:15:36.208114 | orchestrator |
2026-01-08 01:15:36.208117 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-08 01:15:36.208121 | orchestrator | Thursday 08 January 2026 01:11:28 +0000 (0:00:18.399) 0:00:51.280 ******
2026-01-08 01:15:36.208125 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 01:15:36.208129 | orchestrator |
2026-01-08 01:15:36.208133 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-01-08 01:15:36.208136 | orchestrator | Thursday 08 January 2026 01:11:29 +0000 (0:00:00.572) 0:00:51.852 ******
2026-01-08 01:15:36.208140 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:15:36.208144 | orchestrator |
2026-01-08 01:15:36.208148 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-01-08 01:15:36.208152 | orchestrator | Thursday 08 January 2026 01:11:35 +0000 (0:00:06.142) 0:00:57.994 ******
2026-01-08 01:15:36.208205 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:15:36.208210 | orchestrator |
2026-01-08 01:15:36.208218 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-01-08 01:15:36.208227 | orchestrator | Thursday 08 January 2026 01:11:40 +0000 (0:00:05.307) 0:01:03.302 ******
2026-01-08 01:15:36.208235 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:15:36.208242 | orchestrator |
2026-01-08 01:15:36.208248 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-01-08 01:15:36.208255 | orchestrator | Thursday 08 January 2026 01:11:44 +0000 (0:00:03.177) 0:01:06.479 ******
2026-01-08 01:15:36.208262 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-01-08 01:15:36.208267 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-01-08 01:15:36.208271 | orchestrator |
2026-01-08 01:15:36.208275 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-01-08 01:15:36.208279 | orchestrator | Thursday 08 January 2026 01:11:54 +0000 (0:00:10.047) 0:01:16.527 ******
2026-01-08 01:15:36.208284 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-01-08 01:15:36.208289 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-01-08 01:15:36.208297 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-01-08 01:15:36.208302 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-01-08 01:15:36.208306 | orchestrator |
2026-01-08 01:15:36.208310 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-01-08 01:15:36.208319 | orchestrator | Thursday 08 January 2026 01:12:09 +0000 (0:00:15.695) 0:01:32.223 ******
2026-01-08 01:15:36.208323 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:15:36.208328 | orchestrator |
2026-01-08 01:15:36.208332 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-01-08 01:15:36.208337 | orchestrator | Thursday 08 January 2026 01:12:14 +0000 (0:00:04.582) 0:01:36.806 ******
2026-01-08 01:15:36.208341 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:15:36.208346 | orchestrator |
2026-01-08 01:15:36.208369 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-01-08 01:15:36.208379 | orchestrator | Thursday 08 January 2026 01:12:19 +0000 (0:00:05.035) 0:01:41.842 ******
2026-01-08 01:15:36.208386 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:15:36.208393 | orchestrator |
2026-01-08 01:15:36.208399 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-01-08 01:15:36.208405 | orchestrator | Thursday 08 January 2026 01:12:19 +0000 (0:00:00.214) 0:01:42.056 ******
2026-01-08 01:15:36.208411 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:15:36.208418 | orchestrator |
2026-01-08 01:15:36.208424 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-08 01:15:36.208431 | orchestrator | Thursday 08 January 2026 01:12:23 +0000 (0:00:03.886) 0:01:45.943 ******
2026-01-08 01:15:36.208441 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 01:15:36.208446 | orchestrator |
2026-01-08 01:15:36.208450 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-01-08 01:15:36.208455 | orchestrator | Thursday 08 January 2026 01:12:25 +0000 (0:00:01.722) 0:01:47.665 ******
2026-01-08 01:15:36.208460 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:15:36.208464 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:15:36.208468 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:15:36.208473 | orchestrator |
2026-01-08 01:15:36.208477 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-01-08 01:15:36.208481 | orchestrator | Thursday 08 January 2026 01:12:30 +0000 (0:00:05.463) 0:01:53.129 ******
2026-01-08 01:15:36.208486 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:15:36.208490 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:15:36.208494 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:15:36.208499 | orchestrator |
2026-01-08 01:15:36.208503 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-01-08 01:15:36.208507 | orchestrator | Thursday 08 January 2026 01:12:34 +0000 (0:00:04.235) 0:01:57.365 ******
2026-01-08 01:15:36.208512 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:15:36.208516 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:15:36.208520 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:15:36.208525 | orchestrator |
2026-01-08 01:15:36.208529 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-01-08 01:15:36.208534 | orchestrator | Thursday 08 January 2026 01:12:35 +0000 (0:00:00.758) 0:01:58.123 ******
2026-01-08 01:15:36.208538 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:15:36.208543 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:15:36.208547 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:15:36.208551 | orchestrator |
2026-01-08 01:15:36.208555 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-01-08 01:15:36.208560 | orchestrator | Thursday 08 January 2026 01:12:37 +0000 (0:00:01.837) 0:01:59.961 ******
2026-01-08 01:15:36.208568 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:15:36.208573 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:15:36.208577 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:15:36.208581 | orchestrator |
2026-01-08 01:15:36.208586 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-01-08 01:15:36.208590 | orchestrator | Thursday 08 January 2026 01:12:39 +0000 (0:00:02.116) 0:02:02.077 ******
2026-01-08 01:15:36.208594 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:15:36.208599 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:15:36.208603 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:15:36.208608 | orchestrator |
2026-01-08 01:15:36.208612 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-01-08 01:15:36.208617 | orchestrator | Thursday 08 January 2026 01:12:41 +0000 (0:00:01.440) 0:02:03.518 ******
2026-01-08 01:15:36.208621 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:15:36.208625 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:15:36.208630 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:15:36.208634 | orchestrator |
2026-01-08 01:15:36.208638 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-01-08 01:15:36.208643 | orchestrator | Thursday 08 January 2026 01:12:43 +0000 (0:00:02.514) 0:02:06.032 ******
2026-01-08 01:15:36.208663 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:15:36.208671 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:15:36.208677 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:15:36.208683 | orchestrator |
2026-01-08 01:15:36.208689 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-01-08 01:15:36.208695 | orchestrator | Thursday 08 January 2026 01:12:45 +0000 (0:00:01.783) 0:02:07.815 ******
2026-01-08 01:15:36.208701 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:15:36.208707 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:15:36.208712 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:15:36.208718 | orchestrator |
2026-01-08 01:15:36.208723 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-01-08 01:15:36.208729 | orchestrator | Thursday 08 January 2026 01:12:45 +0000 (0:00:00.636) 0:02:08.452 ******
2026-01-08 01:15:36.208735 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:15:36.208741 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:15:36.208748 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:15:36.208754 | orchestrator |
2026-01-08 01:15:36.208761 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-08 01:15:36.208766 | orchestrator | Thursday 08 January 2026 01:12:49 +0000 (0:00:03.411) 0:02:11.863 ******
2026-01-08 01:15:36.208771 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 01:15:36.208775 | orchestrator |
2026-01-08 01:15:36.208779 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-01-08 01:15:36.208793 | orchestrator | Thursday 08 January 2026 01:12:50 +0000 (0:00:00.781) 0:02:12.645 ******
2026-01-08 01:15:36.208798 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:15:36.208802 | orchestrator |
2026-01-08 01:15:36.208807 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-01-08 01:15:36.208811 | orchestrator | Thursday 08 January 2026 01:12:53 +0000 (0:00:03.239) 0:02:15.884 ******
2026-01-08 01:15:36.208831 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:15:36.208836 | orchestrator |
2026-01-08 01:15:36.208840 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-01-08 01:15:36.208845 | orchestrator | Thursday 08 January 2026 01:12:56 +0000 (0:00:03.221) 0:02:19.106 ******
2026-01-08 01:15:36.208849 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-01-08 01:15:36.208854 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-01-08 01:15:36.208859 | orchestrator |
2026-01-08 01:15:36.208863 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-01-08 01:15:36.208868 | orchestrator | Thursday 08 January 2026 01:13:04 +0000 (0:00:08.013) 0:02:27.119 ******
2026-01-08 01:15:36.208876 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:15:36.208881 | orchestrator |
2026-01-08 01:15:36.208885 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-01-08 01:15:36.208890 | orchestrator | Thursday 08 January 2026 01:13:07 +0000 (0:00:03.160) 0:02:30.279 ******
2026-01-08 01:15:36.208897 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:15:36.208901 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:15:36.208905 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:15:36.208909 | orchestrator |
2026-01-08 01:15:36.208913 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-01-08 01:15:36.208916 | orchestrator | Thursday 08 January 2026 01:13:08 +0000 (0:00:00.325) 0:02:30.604 ******
2026-01-08 01:15:36.208923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-08 01:15:36.208930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-08 01:15:36.208934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-08 01:15:36.208979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-08 01:15:36.208990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-08 01:15:36.208994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-08 01:15:36.208999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-08 01:15:36.209004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-08 01:15:36.209008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-08 01:15:36.209013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-08 01:15:36.209020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-08 01:15:36.209030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-08 01:15:36.209034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:15:36.209038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:15:36.209042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:15:36.209046 | orchestrator |
2026-01-08 01:15:36.209050 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-01-08 01:15:36.209054 | orchestrator | Thursday 08 January 2026 01:13:10 +0000 (0:00:02.439) 0:02:33.044 ******
2026-01-08 01:15:36.209058 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:15:36.209062 | orchestrator |
2026-01-08 01:15:36.209065 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-01-08 01:15:36.209069 | orchestrator | Thursday 08 January 2026 01:13:10 +0000 (0:00:00.140) 0:02:33.184 ******
2026-01-08 01:15:36.209073 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:15:36.209077 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:36.209081 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:36.209084 | orchestrator |
2026-01-08 01:15:36.209088 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-01-08 01:15:36.209092 | orchestrator | Thursday 08 January 2026 01:13:11 +0000 (0:00:00.490) 0:02:33.675 ******
2026-01-08 01:15:36.209098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'},
'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-08 01:15:36.209109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-08 01:15:36.209113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-08 01:15:36.209117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-08 01:15:36.209121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:15:36.209125 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:15:36.209129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-08 01:15:36.209138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-08 01:15:36.209142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-08 01:15:36.209149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-08 01:15:36.209153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:15:36.209157 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:36.209161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-08 01:15:36.209165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-08 01:15:36.209173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-08 01:15:36.209177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-08 01:15:36.209475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port
octavia-worker 5672'], 'timeout': '30'}}})
2026-01-08 01:15:36.209491 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:36.209498 | orchestrator |
2026-01-08 01:15:36.209503 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-08 01:15:36.209509 | orchestrator | Thursday 08 January 2026 01:13:11 +0000 (0:00:00.679) 0:02:34.355 ******
2026-01-08 01:15:36.209515 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 01:15:36.209521 | orchestrator |
2026-01-08 01:15:36.209527 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2026-01-08 01:15:36.209533 | orchestrator | Thursday 08 January 2026 01:13:12 +0000 (0:00:00.547) 0:02:34.903 ******
2026-01-08 01:15:36.209540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-08 01:15:36.209547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled':
True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-08 01:15:36.209564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-08 01:15:36.209578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-08 01:15:36.209583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-08 01:15:36.209587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-08 01:15:36.209591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-08 01:15:36.209598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-08 01:15:36.209605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-08 01:15:36.209609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-08 01:15:36.209619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-08 01:15:36.209625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-08 01:15:36.209632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:36.209639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:36.209649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:36.209656 | orchestrator | 2026-01-08 01:15:36.209663 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-01-08 01:15:36.209669 | orchestrator | Thursday 08 January 2026 01:13:17 +0000 (0:00:04.895) 0:02:39.798 ****** 2026-01-08 01:15:36.209682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-08 01:15:36.209692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-08 01:15:36.209699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-08 01:15:36.209706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-08 01:15:36.209716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-08 01:15:36.209722 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:36.209726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-08 01:15:36.209732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-08 01:15:36.209739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-08 01:15:36.209743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-08 01:15:36.209747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-08 01:15:36.209753 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:36.209757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-08 01:15:36.209761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-08 01:15:36.209768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-08 01:15:36.209774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-08 01:15:36.209778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-08 01:15:36.209782 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:36.209786 | orchestrator | 2026-01-08 01:15:36.209808 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-01-08 01:15:36.209812 | orchestrator | Thursday 08 January 2026 01:13:18 +0000 (0:00:01.148) 0:02:40.946 ****** 2026-01-08 01:15:36.209818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-08 01:15:36.209822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-08 01:15:36.209826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-08 01:15:36.209832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-08 01:15:36.209839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-08 01:15:36.209843 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:36.209847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-08 01:15:36.209853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-08 01:15:36.209857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-08 01:15:36.209861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-08 01:15:36.209865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-08 01:15:36.209869 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:36.209875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-08 01:15:36.209880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-08 01:15:36.209886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-08 01:15:36.209890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-08 01:15:36.209894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-08 01:15:36.209967 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:36.209978 | orchestrator | 2026-01-08 01:15:36.209982 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-01-08 01:15:36.209986 | orchestrator | Thursday 08 January 2026 01:13:19 +0000 (0:00:00.915) 0:02:41.862 ****** 2026-01-08 01:15:36.209992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-08 01:15:36.210001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-08 01:15:36.210009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-08 01:15:36.210046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-08 01:15:36.210052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-08 01:15:36.210058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 
'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-08 01:15:36.210062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210114 | orchestrator | 2026-01-08 01:15:36.210119 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-01-08 01:15:36.210124 | orchestrator | Thursday 08 January 2026 01:13:24 +0000 (0:00:04.825) 0:02:46.687 ****** 2026-01-08 01:15:36.210128 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-08 01:15:36.210133 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-08 01:15:36.210138 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-08 01:15:36.210142 | orchestrator | 2026-01-08 01:15:36.210146 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-01-08 01:15:36.210151 | orchestrator | Thursday 08 January 2026 01:13:26 +0000 (0:00:02.272) 0:02:48.960 ****** 2026-01-08 01:15:36.210157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 
'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-08 01:15:36.210168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-08 01:15:36.210182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-08 01:15:36.210194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-08 01:15:36.210200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-08 01:15:36.210207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-08 01:15:36.210213 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210245 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210262 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210275 | orchestrator | 2026-01-08 01:15:36.210282 | orchestrator | TASK [octavia : Copying over Octavia SSH key] 
********************************** 2026-01-08 01:15:36.210289 | orchestrator | Thursday 08 January 2026 01:13:44 +0000 (0:00:17.706) 0:03:06.667 ****** 2026-01-08 01:15:36.210294 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:15:36.210298 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:15:36.210303 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:15:36.210307 | orchestrator | 2026-01-08 01:15:36.210311 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-01-08 01:15:36.210316 | orchestrator | Thursday 08 January 2026 01:13:45 +0000 (0:00:01.703) 0:03:08.370 ****** 2026-01-08 01:15:36.210320 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-08 01:15:36.210325 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-08 01:15:36.210329 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-08 01:15:36.210333 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-08 01:15:36.210338 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-08 01:15:36.210344 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-08 01:15:36.210356 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-08 01:15:36.210365 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-08 01:15:36.210371 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-08 01:15:36.210377 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-08 01:15:36.210383 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-08 01:15:36.210389 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-08 01:15:36.210395 | orchestrator | 2026-01-08 01:15:36.210401 | orchestrator | TASK [octavia : Copying certificate files for 
octavia-housekeeping] ************ 2026-01-08 01:15:36.210408 | orchestrator | Thursday 08 January 2026 01:13:51 +0000 (0:00:05.142) 0:03:13.513 ****** 2026-01-08 01:15:36.210415 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-08 01:15:36.210421 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-08 01:15:36.210428 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-08 01:15:36.210434 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-08 01:15:36.210441 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-08 01:15:36.210447 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-08 01:15:36.210451 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-08 01:15:36.210455 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-08 01:15:36.210459 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-08 01:15:36.210463 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-08 01:15:36.210467 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-08 01:15:36.210470 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-08 01:15:36.210474 | orchestrator | 2026-01-08 01:15:36.210478 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-01-08 01:15:36.210482 | orchestrator | Thursday 08 January 2026 01:13:56 +0000 (0:00:05.923) 0:03:19.436 ****** 2026-01-08 01:15:36.210485 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-08 01:15:36.210489 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-08 01:15:36.210493 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-08 01:15:36.210497 | orchestrator | 
changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-08 01:15:36.210500 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-08 01:15:36.210504 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-08 01:15:36.210508 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-08 01:15:36.210516 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-08 01:15:36.210519 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-08 01:15:36.210523 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-08 01:15:36.210527 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-08 01:15:36.210531 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-08 01:15:36.210534 | orchestrator | 2026-01-08 01:15:36.210538 | orchestrator | TASK [service-check-containers : octavia | Check containers] ******************* 2026-01-08 01:15:36.210542 | orchestrator | Thursday 08 January 2026 01:14:01 +0000 (0:00:04.827) 0:03:24.263 ****** 2026-01-08 01:15:36.210550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-08 01:15:36.210558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-08 01:15:36.210562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-08 01:15:36.210566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-08 01:15:36.210573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-08 01:15:36.210577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-08 01:15:36.210583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:36.210629 | orchestrator | 2026-01-08 01:15:36.210633 | orchestrator | TASK [service-check-containers : octavia | Notify handlers to restart containers] *** 2026-01-08 01:15:36.210636 | orchestrator | Thursday 08 January 2026 01:14:05 +0000 (0:00:03.855) 0:03:28.119 ****** 2026-01-08 01:15:36.210640 | 
orchestrator | changed: [testbed-node-0] => { 2026-01-08 01:15:36.210644 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:15:36.210648 | orchestrator | } 2026-01-08 01:15:36.210654 | orchestrator | changed: [testbed-node-1] => { 2026-01-08 01:15:36.210663 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:15:36.210671 | orchestrator | } 2026-01-08 01:15:36.210677 | orchestrator | changed: [testbed-node-2] => { 2026-01-08 01:15:36.210682 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:15:36.210688 | orchestrator | } 2026-01-08 01:15:36.210694 | orchestrator | 2026-01-08 01:15:36.210699 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-08 01:15:36.210705 | orchestrator | Thursday 08 January 2026 01:14:06 +0000 (0:00:00.359) 0:03:28.478 ****** 2026-01-08 01:15:36.210712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-08 01:15:36.210724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-08 01:15:36.210731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-08 01:15:36.210740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-08 01:15:36.210751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-08 01:15:36.210755 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:36.210759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-08 01:15:36.210766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-08 01:15:36.210770 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-08 01:15:36.210774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-08 01:15:36.210780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-08 01:15:36.210784 | orchestrator | skipping: 
[testbed-node-1] 2026-01-08 01:15:36.210792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2025.1', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-08 01:15:36.210796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2025.1', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-08 01:15:36.210802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2025.1', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-08 01:15:36.210806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2025.1', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-08 01:15:36.210810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2025.1', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-08 01:15:36.210814 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:36.210818 | orchestrator | 2026-01-08 01:15:36.210822 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-08 01:15:36.210826 | orchestrator | Thursday 08 January 2026 01:14:07 +0000 (0:00:01.366) 0:03:29.845 ****** 2026-01-08 01:15:36.210830 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:36.210834 | 
orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:36.210838 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:36.210841 | orchestrator | 2026-01-08 01:15:36.210845 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-01-08 01:15:36.210849 | orchestrator | Thursday 08 January 2026 01:14:07 +0000 (0:00:00.321) 0:03:30.166 ****** 2026-01-08 01:15:36.210852 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:15:36.210856 | orchestrator | 2026-01-08 01:15:36.210862 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-01-08 01:15:36.210866 | orchestrator | Thursday 08 January 2026 01:14:10 +0000 (0:00:02.471) 0:03:32.637 ****** 2026-01-08 01:15:36.210869 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:15:36.210873 | orchestrator | 2026-01-08 01:15:36.210877 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-01-08 01:15:36.210881 | orchestrator | Thursday 08 January 2026 01:14:12 +0000 (0:00:02.540) 0:03:35.178 ****** 2026-01-08 01:15:36.210884 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:15:36.210888 | orchestrator | 2026-01-08 01:15:36.210892 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-01-08 01:15:36.210896 | orchestrator | Thursday 08 January 2026 01:14:14 +0000 (0:00:02.027) 0:03:37.206 ****** 2026-01-08 01:15:36.210900 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:15:36.210906 | orchestrator | 2026-01-08 01:15:36.210910 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-01-08 01:15:36.210914 | orchestrator | Thursday 08 January 2026 01:14:17 +0000 (0:00:03.133) 0:03:40.340 ****** 2026-01-08 01:15:36.210918 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:15:36.210921 | orchestrator | 2026-01-08 01:15:36.210925 | orchestrator | TASK 
[octavia : Flush handlers] ************************************************ 2026-01-08 01:15:36.210931 | orchestrator | Thursday 08 January 2026 01:14:40 +0000 (0:00:22.511) 0:04:02.851 ****** 2026-01-08 01:15:36.210964 | orchestrator | 2026-01-08 01:15:36.210971 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-08 01:15:36.210975 | orchestrator | Thursday 08 January 2026 01:14:40 +0000 (0:00:00.075) 0:04:02.927 ****** 2026-01-08 01:15:36.210979 | orchestrator | 2026-01-08 01:15:36.210983 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-08 01:15:36.210987 | orchestrator | Thursday 08 January 2026 01:14:40 +0000 (0:00:00.072) 0:04:02.999 ****** 2026-01-08 01:15:36.210991 | orchestrator | 2026-01-08 01:15:36.210994 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-01-08 01:15:36.210998 | orchestrator | Thursday 08 January 2026 01:14:40 +0000 (0:00:00.266) 0:04:03.265 ****** 2026-01-08 01:15:36.211002 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:15:36.211006 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:15:36.211010 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:15:36.211013 | orchestrator | 2026-01-08 01:15:36.211017 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-01-08 01:15:36.211021 | orchestrator | Thursday 08 January 2026 01:14:55 +0000 (0:00:14.842) 0:04:18.108 ****** 2026-01-08 01:15:36.211025 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:15:36.211028 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:15:36.211032 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:15:36.211036 | orchestrator | 2026-01-08 01:15:36.211040 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-01-08 01:15:36.211044 | orchestrator | Thursday 08 January 2026 
01:15:06 +0000 (0:00:11.019) 0:04:29.128 ****** 2026-01-08 01:15:36.211048 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:15:36.211051 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:15:36.211055 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:15:36.211059 | orchestrator | 2026-01-08 01:15:36.211063 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-01-08 01:15:36.211066 | orchestrator | Thursday 08 January 2026 01:15:13 +0000 (0:00:07.326) 0:04:36.455 ****** 2026-01-08 01:15:36.211070 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:15:36.211074 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:15:36.211078 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:15:36.211081 | orchestrator | 2026-01-08 01:15:36.211085 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-01-08 01:15:36.211089 | orchestrator | Thursday 08 January 2026 01:15:24 +0000 (0:00:10.050) 0:04:46.505 ****** 2026-01-08 01:15:36.211093 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:15:36.211097 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:15:36.211100 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:15:36.211104 | orchestrator | 2026-01-08 01:15:36.211108 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 01:15:36.211112 | orchestrator | testbed-node-0 : ok=58  changed=39  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-08 01:15:36.211117 | orchestrator | testbed-node-1 : ok=34  changed=23  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-08 01:15:36.211121 | orchestrator | testbed-node-2 : ok=34  changed=23  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-08 01:15:36.211124 | orchestrator | 2026-01-08 01:15:36.211128 | orchestrator | 2026-01-08 01:15:36.211135 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-08 01:15:36.211139 | orchestrator | Thursday 08 January 2026 01:15:34 +0000 (0:00:10.347) 0:04:56.853 ****** 2026-01-08 01:15:36.211143 | orchestrator | =============================================================================== 2026-01-08 01:15:36.211147 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.51s 2026-01-08 01:15:36.211151 | orchestrator | octavia : Adding octavia related roles --------------------------------- 18.40s 2026-01-08 01:15:36.211155 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.71s 2026-01-08 01:15:36.211158 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.70s 2026-01-08 01:15:36.211162 | orchestrator | octavia : Restart octavia-api container -------------------------------- 14.84s 2026-01-08 01:15:36.211166 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.02s 2026-01-08 01:15:36.211170 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.35s 2026-01-08 01:15:36.211176 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.05s 2026-01-08 01:15:36.211180 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.05s 2026-01-08 01:15:36.211184 | orchestrator | octavia : Get security groups for octavia ------------------------------- 8.01s 2026-01-08 01:15:36.211187 | orchestrator | service-ks-register : octavia | Granting/revoking user roles ------------ 7.36s 2026-01-08 01:15:36.211191 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.36s 2026-01-08 01:15:36.211195 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 7.33s 2026-01-08 01:15:36.211199 | orchestrator | service-ks-register : octavia | 
Creating/deleting endpoints ------------- 6.38s 2026-01-08 01:15:36.211203 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 6.14s 2026-01-08 01:15:36.211206 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.92s 2026-01-08 01:15:36.211210 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.46s 2026-01-08 01:15:36.211214 | orchestrator | octavia : Create nova keypair for amphora ------------------------------- 5.31s 2026-01-08 01:15:36.211218 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.14s 2026-01-08 01:15:36.211225 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.04s 2026-01-08 01:15:36.211229 | orchestrator | 2026-01-08 01:15:36 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:15:39.246567 | orchestrator | 2026-01-08 01:15:39 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:15:39.246625 | orchestrator | 2026-01-08 01:15:39 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:15:42.305345 | orchestrator | 2026-01-08 01:15:42 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:15:42.305438 | orchestrator | 2026-01-08 01:15:42 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:15:45.366233 | orchestrator | 2026-01-08 01:15:45 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state STARTED 2026-01-08 01:15:45.366336 | orchestrator | 2026-01-08 01:15:45 | INFO  | Wait 1 second(s) until the next check 2026-01-08 01:15:48.420662 | orchestrator | 2026-01-08 01:15:48 | INFO  | Task e8a4428e-238b-4b64-ad71-1b915a3b45e0 is in state SUCCESS 2026-01-08 01:15:48.422207 | orchestrator | 2026-01-08 01:15:48.422271 | orchestrator | 2026-01-08 01:15:48.422282 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
2026-01-08 01:15:48.422291 | orchestrator | 2026-01-08 01:15:48.422297 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-01-08 01:15:48.422305 | orchestrator | Thursday 08 January 2026 01:06:26 +0000 (0:00:00.391) 0:00:00.391 ****** 2026-01-08 01:15:48.422312 | orchestrator | changed: [testbed-manager] 2026-01-08 01:15:48.422320 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:15:48.422349 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:15:48.422355 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:15:48.422362 | orchestrator | changed: [testbed-node-3] 2026-01-08 01:15:48.422368 | orchestrator | changed: [testbed-node-4] 2026-01-08 01:15:48.422374 | orchestrator | changed: [testbed-node-5] 2026-01-08 01:15:48.422379 | orchestrator | 2026-01-08 01:15:48.422386 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-08 01:15:48.422392 | orchestrator | Thursday 08 January 2026 01:06:27 +0000 (0:00:01.292) 0:00:01.683 ****** 2026-01-08 01:15:48.422398 | orchestrator | changed: [testbed-manager] 2026-01-08 01:15:48.422404 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:15:48.422410 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:15:48.422416 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:15:48.422422 | orchestrator | changed: [testbed-node-3] 2026-01-08 01:15:48.422428 | orchestrator | changed: [testbed-node-4] 2026-01-08 01:15:48.422435 | orchestrator | changed: [testbed-node-5] 2026-01-08 01:15:48.422441 | orchestrator | 2026-01-08 01:15:48.422447 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-08 01:15:48.422454 | orchestrator | Thursday 08 January 2026 01:06:28 +0000 (0:00:00.675) 0:00:02.359 ****** 2026-01-08 01:15:48.422461 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-01-08 01:15:48.422467 | orchestrator | changed: 
[testbed-node-0] => (item=enable_nova_True) 2026-01-08 01:15:48.422474 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-01-08 01:15:48.422481 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-01-08 01:15:48.422487 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-01-08 01:15:48.422493 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-01-08 01:15:48.422500 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-01-08 01:15:48.422505 | orchestrator | 2026-01-08 01:15:48.422512 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-01-08 01:15:48.422517 | orchestrator | 2026-01-08 01:15:48.422524 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-01-08 01:15:48.422530 | orchestrator | Thursday 08 January 2026 01:06:29 +0000 (0:00:00.997) 0:00:03.357 ****** 2026-01-08 01:15:48.422537 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 01:15:48.422543 | orchestrator | 2026-01-08 01:15:48.422549 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-01-08 01:15:48.422556 | orchestrator | Thursday 08 January 2026 01:06:30 +0000 (0:00:00.544) 0:00:03.902 ****** 2026-01-08 01:15:48.422563 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-01-08 01:15:48.422570 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-01-08 01:15:48.422576 | orchestrator | 2026-01-08 01:15:48.422583 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-01-08 01:15:48.422613 | orchestrator | Thursday 08 January 2026 01:06:34 +0000 (0:00:04.450) 0:00:08.352 ****** 2026-01-08 01:15:48.422621 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-08 01:15:48.422627 | orchestrator | changed: 
[testbed-node-0] => (item=None) 2026-01-08 01:15:48.422634 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:15:48.422640 | orchestrator | 2026-01-08 01:15:48.422646 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-01-08 01:15:48.422652 | orchestrator | Thursday 08 January 2026 01:06:39 +0000 (0:00:04.567) 0:00:12.920 ****** 2026-01-08 01:15:48.422659 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:15:48.422665 | orchestrator | 2026-01-08 01:15:48.422671 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-01-08 01:15:48.422684 | orchestrator | Thursday 08 January 2026 01:06:40 +0000 (0:00:01.070) 0:00:13.990 ****** 2026-01-08 01:15:48.422690 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:15:48.422696 | orchestrator | 2026-01-08 01:15:48.422703 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-01-08 01:15:48.422715 | orchestrator | Thursday 08 January 2026 01:06:42 +0000 (0:00:01.948) 0:00:15.939 ****** 2026-01-08 01:15:48.422722 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:15:48.422728 | orchestrator | 2026-01-08 01:15:48.422734 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-01-08 01:15:48.422740 | orchestrator | Thursday 08 January 2026 01:06:45 +0000 (0:00:02.820) 0:00:18.759 ****** 2026-01-08 01:15:48.422746 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.422753 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.422759 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.422765 | orchestrator | 2026-01-08 01:15:48.422771 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-01-08 01:15:48.422777 | orchestrator | Thursday 08 January 2026 01:06:45 +0000 (0:00:00.324) 0:00:19.084 ****** 2026-01-08 01:15:48.422783 | orchestrator | 
ok: [testbed-node-0]
2026-01-08 01:15:48.422789 | orchestrator |
2026-01-08 01:15:48.422794 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-01-08 01:15:48.422800 | orchestrator | Thursday 08 January 2026 01:07:13 +0000 (0:00:28.560) 0:00:47.645 ******
2026-01-08 01:15:48.422806 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:15:48.422812 | orchestrator |
2026-01-08 01:15:48.422818 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-01-08 01:15:48.422824 | orchestrator | Thursday 08 January 2026 01:07:28 +0000 (0:00:14.284) 0:01:01.930 ******
2026-01-08 01:15:48.422831 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:15:48.422838 | orchestrator |
2026-01-08 01:15:48.422844 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-01-08 01:15:48.422851 | orchestrator | Thursday 08 January 2026 01:07:39 +0000 (0:00:11.088) 0:01:13.019 ******
2026-01-08 01:15:48.422869 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:15:48.422876 | orchestrator |
2026-01-08 01:15:48.422882 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-01-08 01:15:48.422889 | orchestrator | Thursday 08 January 2026 01:07:40 +0000 (0:00:01.325) 0:01:14.344 ******
2026-01-08 01:15:48.422895 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:15:48.422901 | orchestrator |
2026-01-08 01:15:48.422908 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-08 01:15:48.422914 | orchestrator | Thursday 08 January 2026 01:07:41 +0000 (0:00:00.600) 0:01:14.945 ******
2026-01-08 01:15:48.422921 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 01:15:48.422947 | orchestrator |
2026-01-08 01:15:48.422955 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-01-08 01:15:48.422961 | orchestrator | Thursday 08 January 2026 01:07:41 +0000 (0:00:00.506) 0:01:15.452 ******
2026-01-08 01:15:48.422967 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:15:48.422973 | orchestrator |
2026-01-08 01:15:48.422979 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-01-08 01:15:48.422986 | orchestrator | Thursday 08 January 2026 01:07:59 +0000 (0:00:17.449) 0:01:32.902 ******
2026-01-08 01:15:48.422992 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:15:48.422999 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:48.423005 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:48.423011 | orchestrator |
2026-01-08 01:15:48.423018 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-01-08 01:15:48.423024 | orchestrator |
2026-01-08 01:15:48.423031 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-01-08 01:15:48.423037 | orchestrator | Thursday 08 January 2026 01:07:59 +0000 (0:00:00.325) 0:01:33.227 ******
2026-01-08 01:15:48.423043 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 01:15:48.423050 | orchestrator |
2026-01-08 01:15:48.423056 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-01-08 01:15:48.423062 | orchestrator | Thursday 08 January 2026 01:08:00 +0000 (0:00:00.622) 0:01:33.849 ******
2026-01-08 01:15:48.423075 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:48.423081 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:48.423087 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:15:48.423093 | orchestrator |
2026-01-08 01:15:48.423099 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-01-08 01:15:48.423106 | orchestrator | Thursday 08 January 2026 01:08:02 +0000 (0:00:01.906) 0:01:35.755 ******
2026-01-08 01:15:48.423112 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:48.423119 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:48.423125 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:15:48.423131 | orchestrator |
2026-01-08 01:15:48.423137 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-01-08 01:15:48.423143 | orchestrator | Thursday 08 January 2026 01:08:04 +0000 (0:00:02.135) 0:01:37.891 ******
2026-01-08 01:15:48.423150 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:15:48.423156 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:48.423163 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:48.423169 | orchestrator |
2026-01-08 01:15:48.423175 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-01-08 01:15:48.423186 | orchestrator | Thursday 08 January 2026 01:08:04 +0000 (0:00:00.331) 0:01:38.223 ******
2026-01-08 01:15:48.423191 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-08 01:15:48.423195 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:48.423199 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-08 01:15:48.423203 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:48.423207 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-01-08 01:15:48.423212 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-01-08 01:15:48.423218 | orchestrator |
2026-01-08 01:15:48.423224 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-01-08 01:15:48.423234 | orchestrator | Thursday 08 January 2026 01:08:15 +0000 (0:00:11.054) 0:01:49.278 ******
2026-01-08 01:15:48.423241 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:15:48.423247 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:48.423253 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:48.423259 | orchestrator |
2026-01-08 01:15:48.423265 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-01-08 01:15:48.423270 | orchestrator | Thursday 08 January 2026 01:08:15 +0000 (0:00:00.286) 0:01:49.565 ******
2026-01-08 01:15:48.423277 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-08 01:15:48.423283 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:15:48.423289 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-08 01:15:48.423295 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:48.423301 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-08 01:15:48.423307 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:48.423313 | orchestrator |
2026-01-08 01:15:48.423319 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-01-08 01:15:48.423326 | orchestrator | Thursday 08 January 2026 01:08:16 +0000 (0:00:00.643) 0:01:50.208 ******
2026-01-08 01:15:48.423330 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:48.423334 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:48.423338 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:15:48.423341 | orchestrator |
2026-01-08 01:15:48.423345 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-01-08 01:15:48.423349 | orchestrator | Thursday 08 January 2026 01:08:17 +0000 (0:00:00.644) 0:01:50.852 ******
2026-01-08 01:15:48.423353 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:48.423357 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:48.423360 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:15:48.423364 | orchestrator |
2026-01-08 01:15:48.423368 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-01-08 01:15:48.423377 | orchestrator | Thursday 08 January 2026 01:08:18 +0000 (0:00:00.910) 0:01:51.763 ******
2026-01-08 01:15:48.423381 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:48.423385 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:48.423397 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:15:48.423401 | orchestrator |
2026-01-08 01:15:48.423405 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-01-08 01:15:48.423409 | orchestrator | Thursday 08 January 2026 01:08:20 +0000 (0:00:01.927) 0:01:53.690 ******
2026-01-08 01:15:48.423412 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:48.423416 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:48.423420 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:15:48.423424 | orchestrator |
2026-01-08 01:15:48.423428 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-01-08 01:15:48.423432 | orchestrator | Thursday 08 January 2026 01:08:41 +0000 (0:00:21.583) 0:02:15.273 ******
2026-01-08 01:15:48.423435 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:48.423439 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:48.423443 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:15:48.423447 | orchestrator |
2026-01-08 01:15:48.423454 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-01-08 01:15:48.423460 | orchestrator | Thursday 08 January 2026 01:08:53 +0000 (0:00:12.175) 0:02:27.449 ******
2026-01-08 01:15:48.423466 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:15:48.423472 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:48.423477 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:48.423483 | orchestrator |
2026-01-08 01:15:48.423489 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-01-08 01:15:48.423498 | orchestrator | Thursday 08 January 2026 01:08:55 +0000 (0:00:01.267) 0:02:28.716 ******
2026-01-08 01:15:48.423505 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:48.423514 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:48.423520 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:15:48.423526 | orchestrator |
2026-01-08 01:15:48.423533 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-01-08 01:15:48.423538 | orchestrator | Thursday 08 January 2026 01:09:07 +0000 (0:00:12.810) 0:02:41.527 ******
2026-01-08 01:15:48.423545 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:15:48.423550 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:48.423556 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:48.423562 | orchestrator |
2026-01-08 01:15:48.423569 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-01-08 01:15:48.423575 | orchestrator | Thursday 08 January 2026 01:09:08 +0000 (0:00:01.034) 0:02:42.561 ******
2026-01-08 01:15:48.423581 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:15:48.423587 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:48.423593 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:48.423599 | orchestrator |
2026-01-08 01:15:48.423605 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-01-08 01:15:48.423611 | orchestrator |
2026-01-08 01:15:48.423617 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-08 01:15:48.423623 | orchestrator | Thursday 08 January 2026 01:09:09 +0000 (0:00:00.524) 0:02:43.085 ******
2026-01-08 01:15:48.423630 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 01:15:48.423638 | orchestrator |
2026-01-08 01:15:48.423644 | orchestrator | TASK [service-ks-register : nova | Creating/deleting services] *****************
2026-01-08 01:15:48.423649 | orchestrator | Thursday 08 January 2026 01:09:09 +0000 (0:00:00.591) 0:02:43.677 ******
2026-01-08 01:15:48.423661 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-01-08 01:15:48.423667 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-01-08 01:15:48.423673 | orchestrator |
2026-01-08 01:15:48.423679 | orchestrator | TASK [service-ks-register : nova | Creating/deleting endpoints] ****************
2026-01-08 01:15:48.423691 | orchestrator | Thursday 08 January 2026 01:09:13 +0000 (0:00:03.290) 0:02:46.967 ******
2026-01-08 01:15:48.423697 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-01-08 01:15:48.423706 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-01-08 01:15:48.423712 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-01-08 01:15:48.423718 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-01-08 01:15:48.423723 | orchestrator |
2026-01-08 01:15:48.423729 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-01-08 01:15:48.423735 | orchestrator | Thursday 08 January 2026 01:09:19 +0000 (0:00:06.392) 0:02:53.360 ******
2026-01-08 01:15:48.423741 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-08 01:15:48.423746 | orchestrator |
2026-01-08 01:15:48.423751 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-01-08 01:15:48.423757 | orchestrator | Thursday 08 January 2026 01:09:23 +0000 (0:00:03.350) 0:02:56.711 ******
2026-01-08 01:15:48.423762 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-08 01:15:48.423768 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-01-08 01:15:48.423774 | orchestrator |
2026-01-08 01:15:48.423780 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-01-08 01:15:48.423785 | orchestrator | Thursday 08 January 2026 01:09:27 +0000 (0:00:04.220) 0:03:00.931 ******
2026-01-08 01:15:48.423791 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-08 01:15:48.423797 | orchestrator |
2026-01-08 01:15:48.423803 | orchestrator | TASK [service-ks-register : nova | Granting/revoking user roles] ***************
2026-01-08 01:15:48.423808 | orchestrator | Thursday 08 January 2026 01:09:30 +0000 (0:00:03.169) 0:03:04.100 ******
2026-01-08 01:15:48.423815 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-01-08 01:15:48.423821 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-01-08 01:15:48.423826 | orchestrator |
2026-01-08 01:15:48.423832 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-01-08 01:15:48.423845 | orchestrator | Thursday 08 January 2026 01:09:36 +0000 (0:00:06.244) 0:03:10.345 ******
2026-01-08 01:15:48.423858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:15:48.423869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:15:48.423892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:15:48.423918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:15:48.423926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:15:48.423992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:15:48.424012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-08 01:15:48.424020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-08 01:15:48.424026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-08 01:15:48.424033 | orchestrator |
2026-01-08 01:15:48.424046 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-01-08 01:15:48.424053 | orchestrator | Thursday 08 January 2026 01:09:38 +0000 (0:00:01.665) 0:03:12.010 ******
2026-01-08 01:15:48.424059 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:15:48.424066 | orchestrator |
2026-01-08 01:15:48.424073 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-01-08 01:15:48.424079 | orchestrator | Thursday 08 January 2026 01:09:38 +0000 (0:00:00.145) 0:03:12.156 ******
2026-01-08 01:15:48.424084 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:15:48.424091 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:48.424097 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:48.424103 | orchestrator |
2026-01-08 01:15:48.424107 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-01-08 01:15:48.424111 | orchestrator | Thursday 08 January 2026 01:09:38 +0000 (0:00:00.316) 0:03:12.473 ******
2026-01-08 01:15:48.424115 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-08 01:15:48.424119 | orchestrator |
2026-01-08 01:15:48.424123 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-01-08 01:15:48.424127 | orchestrator | Thursday 08 January 2026 01:09:39 +0000 (0:00:00.924) 0:03:13.398 ******
2026-01-08 01:15:48.424131 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:15:48.424135 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:48.424143 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:48.424147 | orchestrator |
2026-01-08 01:15:48.424151 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-08 01:15:48.424155 | orchestrator | Thursday 08 January 2026 01:09:40 +0000 (0:00:00.329) 0:03:13.727 ******
2026-01-08 01:15:48.424159 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 01:15:48.424163 | orchestrator |
2026-01-08 01:15:48.424167 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-01-08 01:15:48.424171 | orchestrator | Thursday 08 January 2026 01:09:40 +0000 (0:00:00.551) 0:03:14.279 ******
2026-01-08 01:15:48.424178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:15:48.424183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:15:48.424192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:15:48.424197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:15:48.424211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:15:48.424215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:15:48.424223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-08 01:15:48.424230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-08 01:15:48.424241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-08 01:15:48.424247 | orchestrator |
2026-01-08 01:15:48.424253 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-01-08 01:15:48.424259 | orchestrator | Thursday 08 January 2026 01:09:44 +0000 (0:00:03.424) 0:03:17.704 ******
2026-01-08 01:15:48.424269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:15:48.424277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:15:48.424284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-08 01:15:48.424291 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:15:48.424303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:15:48.424316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:15:48.424325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-08 01:15:48.424332 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:48.424339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-08 01:15:48.424349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes':
['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:15:48.424357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.424361 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.424365 | orchestrator | 2026-01-08 01:15:48.424369 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-08 01:15:48.424373 | orchestrator | Thursday 08 January 2026 01:09:44 +0000 (0:00:00.733) 0:03:18.437 ****** 2026-01-08 01:15:48.424381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:15:48.424386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:15:48.424393 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.424402 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.424407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:15:48.424411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 
'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:15:48.424419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.424423 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.424427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:15:48.424439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:15:48.424444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.424448 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.424452 | orchestrator | 2026-01-08 01:15:48.424456 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-01-08 01:15:48.424460 | orchestrator | Thursday 08 January 2026 01:09:45 +0000 (0:00:00.964) 0:03:19.401 ****** 2026-01-08 01:15:48.424467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:15:48.424472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:15:48.424485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:15:48.424489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:15:48.424497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:15:48.424502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:15:48.424515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.424520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.424524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.424528 | orchestrator | 2026-01-08 01:15:48.424532 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-01-08 01:15:48.424537 | orchestrator | Thursday 08 January 2026 01:09:49 +0000 (0:00:03.973) 0:03:23.375 ****** 2026-01-08 01:15:48.424545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:15:48.424550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:15:48.424561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:15:48.424566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:15:48.424573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 
'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:15:48.424577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:15:48.424589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.424593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.424597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.424603 | orchestrator | 2026-01-08 01:15:48.424610 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-01-08 01:15:48.424616 | orchestrator | Thursday 08 January 
2026 01:09:57 +0000 (0:00:07.861) 0:03:31.236 ****** 2026-01-08 01:15:48.424625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:15:48.424636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:15:48.424647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.424654 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.424661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:15:48.424671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:15:48.424678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.424689 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.424699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:15:48.424704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  
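The skipped items above each carry a full kolla service definition, including per-service `haproxy` entries whose `enabled` flag is sometimes a boolean (`True`) and sometimes the string `'no'` (as in `nova_metadata_external`). A minimal sketch of pulling the enabled listen ports out of such a dict — the `extract_haproxy_ports` helper is hypothetical, not kolla-ansible code:

```python
# Mimics the shape of the kolla service dicts logged above.
service = {
    "container_name": "nova_api",
    "haproxy": {
        "nova_api": {"enabled": True, "external": False, "listen_port": "8774"},
        "nova_api_external": {"enabled": True, "external": True, "listen_port": "8774"},
        "nova_metadata_like": {"enabled": "no", "external": True, "listen_port": "8775"},
    },
}

def extract_haproxy_ports(svc):
    """Return (name, listen_port) for every enabled haproxy entry.

    Note the mixed truthiness in the logged data: 'enabled' may be the
    boolean True or the string 'no', so compare against both forms.
    """
    return [
        (name, entry["listen_port"])
        for name, entry in svc.get("haproxy", {}).items()
        if entry.get("enabled") in (True, "yes")
    ]

print(extract_haproxy_ports(service))
# [('nova_api', '8774'), ('nova_api_external', '8774')]
```

The string-vs-boolean mix is visible in the log itself and is why a naive `if entry["enabled"]:` check would wrongly treat `'no'` as truthy.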
2026-01-08 01:15:48.424708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.424712 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.424716 | orchestrator | 2026-01-08 01:15:48.424720 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-01-08 01:15:48.424724 | orchestrator | Thursday 08 January 2026 01:09:58 +0000 (0:00:00.720) 0:03:31.957 ****** 2026-01-08 01:15:48.424728 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.424732 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.424736 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.424740 | orchestrator | 2026-01-08 01:15:48.424744 | orchestrator | TASK [nova : Copying over nova-metadata-wsgi.conf] ***************************** 2026-01-08 01:15:48.424748 | orchestrator | Thursday 08 January 2026 01:09:58 +0000 (0:00:00.695) 0:03:32.653 ****** 2026-01-08 01:15:48.424752 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.424759 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.424763 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.424766 | orchestrator | 2026-01-08 01:15:48.424774 | orchestrator | TASK [nova : Copying over vendordata file for nova services] ******************* 2026-01-08 01:15:48.424778 | orchestrator | Thursday 08 January 2026 01:09:59 +0000 (0:00:00.953) 0:03:33.606 ****** 2026-01-08 
01:15:48.424783 | orchestrator | skipping: [testbed-node-0] => (item=nova-metadata)  2026-01-08 01:15:48.424787 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-01-08 01:15:48.424790 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.424794 | orchestrator | skipping: [testbed-node-1] => (item=nova-metadata)  2026-01-08 01:15:48.424798 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-01-08 01:15:48.424802 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.424806 | orchestrator | skipping: [testbed-node-2] => (item=nova-metadata)  2026-01-08 01:15:48.424810 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-01-08 01:15:48.424814 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.424818 | orchestrator | 2026-01-08 01:15:48.424822 | orchestrator | TASK [Configure uWSGI for Nova] ************************************************ 2026-01-08 01:15:48.424826 | orchestrator | Thursday 08 January 2026 01:10:00 +0000 (0:00:00.615) 0:03:34.222 ****** 2026-01-08 01:15:48.424830 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-api', 'port': '8774'}) 2026-01-08 01:15:48.424834 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-metadata', 'port': '8775'}) 2026-01-08 01:15:48.424838 | orchestrator | 2026-01-08 01:15:48.424842 | orchestrator | TASK [service-uwsgi-config : Copying over nova-api uWSGI config] *************** 2026-01-08 01:15:48.424846 | orchestrator | Thursday 08 January 2026 01:10:01 +0000 (0:00:01.345) 0:03:35.568 ****** 2026-01-08 01:15:48.424850 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:15:48.424854 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:15:48.424858 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:15:48.424862 | orchestrator | 2026-01-08 01:15:48.424866 | orchestrator | TASK 
[service-uwsgi-config : Copying over nova-metadata uWSGI config] ********** 2026-01-08 01:15:48.424870 | orchestrator | Thursday 08 January 2026 01:10:04 +0000 (0:00:02.429) 0:03:37.997 ****** 2026-01-08 01:15:48.424874 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:15:48.424878 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:15:48.424882 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:15:48.424886 | orchestrator | 2026-01-08 01:15:48.424890 | orchestrator | TASK [service-check-containers : nova | Check containers] ********************** 2026-01-08 01:15:48.424894 | orchestrator | Thursday 08 January 2026 01:10:06 +0000 (0:00:02.034) 0:03:40.032 ****** 2026-01-08 01:15:48.424901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:15:48.424906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:15:48.424917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:15:48.424925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:15:48.425007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:15:48.425018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-08 01:15:48.425027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.425034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.425040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.425047 | orchestrator | 2026-01-08 01:15:48.425053 | orchestrator | TASK [service-check-containers : nova | Notify handlers to restart containers] *** 2026-01-08 01:15:48.425058 | orchestrator | Thursday 08 January 2026 01:10:08 +0000 (0:00:02.634) 0:03:42.666 ****** 2026-01-08 01:15:48.425536 | orchestrator | changed: [testbed-node-0] => { 2026-01-08 01:15:48.425578 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:15:48.425585 | orchestrator | } 2026-01-08 01:15:48.425591 | orchestrator | changed: [testbed-node-1] => { 2026-01-08 01:15:48.425597 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:15:48.425604 | orchestrator | } 2026-01-08 01:15:48.425609 | orchestrator | changed: [testbed-node-2] => { 2026-01-08 01:15:48.425616 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:15:48.425621 | orchestrator | } 2026-01-08 01:15:48.425627 | 
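Each container item checked above carries a `healthcheck` dict whose numeric fields are stored as strings of seconds (`interval`, `timeout`, `start_period`) plus a `['CMD-SHELL', ...]` test command. A sketch of how such a dict maps onto `docker run` health flags — the `to_docker_flags` helper is hypothetical, shown only to make the dict's meaning concrete:

```python
# Shape taken from the kolla healthcheck dicts logged above;
# second values arrive as strings, not ints.
healthcheck = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port nova-scheduler 5672"],
    "timeout": "30",
}

def to_docker_flags(hc):
    """Translate a kolla-style healthcheck dict into docker CLI flags."""
    return [
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
        f"--health-cmd={hc['test'][1]}",  # drop the CMD-SHELL marker
    ]

print(" ".join(to_docker_flags(healthcheck)))
```

The `healthcheck_port nova-scheduler 5672` test seen in the log checks that the scheduler process holds a connection on the RabbitMQ port (5672) rather than probing an HTTP endpoint, since nova-scheduler exposes no API of its own.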
orchestrator | 2026-01-08 01:15:48.425634 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-08 01:15:48.425640 | orchestrator | Thursday 08 January 2026 01:10:09 +0000 (0:00:00.574) 0:03:43.240 ****** 2026-01-08 01:15:48.425649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:15:48.425680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:15:48.425688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.425695 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.425710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:15:48.425718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:15:48.425729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.425734 | orchestrator | 
skipping: [testbed-node-1] 2026-01-08 01:15:48.425742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:15:48.425746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/nova-api:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-08 01:15:48.425756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.425763 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.425767 | orchestrator | 2026-01-08 01:15:48.425771 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-08 01:15:48.425775 | orchestrator | Thursday 08 January 2026 01:10:10 +0000 (0:00:00.883) 0:03:44.124 ****** 2026-01-08 01:15:48.425779 | orchestrator | 2026-01-08 01:15:48.425783 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-08 01:15:48.425787 | orchestrator | Thursday 08 January 2026 01:10:10 +0000 (0:00:00.138) 0:03:44.262 ****** 2026-01-08 01:15:48.425791 | orchestrator | 2026-01-08 01:15:48.425794 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-08 01:15:48.425798 | orchestrator | Thursday 08 January 2026 01:10:10 +0000 (0:00:00.129) 0:03:44.391 ****** 2026-01-08 01:15:48.425802 | orchestrator | 2026-01-08 01:15:48.425806 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-01-08 01:15:48.425810 | orchestrator | Thursday 08 January 2026 01:10:11 +0000 
(0:00:00.308) 0:03:44.700 ******
2026-01-08 01:15:48.425814 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:15:48.425818 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:15:48.425821 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:15:48.425825 | orchestrator |
2026-01-08 01:15:48.425829 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-01-08 01:15:48.425833 | orchestrator | Thursday 08 January 2026 01:10:24 +0000 (0:00:13.860) 0:03:58.561 ******
2026-01-08 01:15:48.425837 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:15:48.425841 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:15:48.425844 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:15:48.425848 | orchestrator |
2026-01-08 01:15:48.425852 | orchestrator | RUNNING HANDLER [nova : Restart nova-metadata container] ***********************
2026-01-08 01:15:48.425856 | orchestrator | Thursday 08 January 2026 01:10:35 +0000 (0:00:10.752) 0:04:09.313 ******
2026-01-08 01:15:48.425860 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:15:48.425863 | orchestrator | changed: [testbed-node-2]
2026-01-08 01:15:48.425867 | orchestrator | changed: [testbed-node-1]
2026-01-08 01:15:48.425872 | orchestrator |
2026-01-08 01:15:48.425875 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-01-08 01:15:48.425879 | orchestrator |
2026-01-08 01:15:48.425883 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-01-08 01:15:48.425887 | orchestrator | Thursday 08 January 2026 01:10:44 +0000 (0:00:09.333) 0:04:18.647 ******
2026-01-08 01:15:48.425891 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 01:15:48.425897 | orchestrator |
2026-01-08 01:15:48.425900 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-01-08 01:15:48.425907 | orchestrator | Thursday 08 January 2026 01:10:46 +0000 (0:00:01.280) 0:04:19.927 ******
2026-01-08 01:15:48.425911 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:15:48.425915 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:15:48.425919 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:15:48.425922 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:15:48.425926 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:48.425954 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:48.425961 | orchestrator |
2026-01-08 01:15:48.425967 | orchestrator | TASK [nova-cell : Get new Libvirt version] *************************************
2026-01-08 01:15:48.425973 | orchestrator | Thursday 08 January 2026 01:10:47 +0000 (0:00:00.779) 0:04:20.706 ******
2026-01-08 01:15:48.425977 | orchestrator | changed: [testbed-node-3]
2026-01-08 01:15:48.425980 | orchestrator |
2026-01-08 01:15:48.425984 | orchestrator | TASK [nova-cell : Cache new Libvirt version] ***********************************
2026-01-08 01:15:48.425992 | orchestrator | Thursday 08 January 2026 01:11:07 +0000 (0:00:20.765) 0:04:41.472 ******
2026-01-08 01:15:48.425996 | orchestrator | ok: [testbed-node-3]
2026-01-08 01:15:48.426000 | orchestrator |
2026-01-08 01:15:48.426004 | orchestrator | TASK [Get nova_libvirt image info] *********************************************
2026-01-08 01:15:48.426008 | orchestrator | Thursday 08 January 2026 01:11:09 +0000 (0:00:01.459) 0:04:42.932 ******
2026-01-08 01:15:48.426042 | orchestrator | included: service-image-info for testbed-node-3
2026-01-08 01:15:48.426048 | orchestrator |
2026-01-08 01:15:48.426052 | orchestrator | TASK [service-image-info : community.docker.docker_image_info] *****************
2026-01-08 01:15:48.426056 | orchestrator | Thursday 08 January 2026 01:11:09 +0000 (0:00:00.728) 0:04:43.661 ******
2026-01-08 01:15:48.426060 | orchestrator | ok: [testbed-node-3]
2026-01-08 01:15:48.426064 | orchestrator |
2026-01-08 01:15:48.426068 | orchestrator | TASK [service-image-info : set_fact] *******************************************
2026-01-08 01:15:48.426072 | orchestrator | Thursday 08 January 2026 01:11:13 +0000 (0:00:03.550) 0:04:47.211 ******
2026-01-08 01:15:48.426076 | orchestrator | ok: [testbed-node-3]
2026-01-08 01:15:48.426080 | orchestrator |
2026-01-08 01:15:48.426084 | orchestrator | TASK [service-image-info : containers.podman.podman_image_info] ****************
2026-01-08 01:15:48.426087 | orchestrator | Thursday 08 January 2026 01:11:15 +0000 (0:00:02.074) 0:04:49.286 ******
2026-01-08 01:15:48.426091 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:15:48.426095 | orchestrator |
2026-01-08 01:15:48.426099 | orchestrator | TASK [service-image-info : set_fact] *******************************************
2026-01-08 01:15:48.426103 | orchestrator | Thursday 08 January 2026 01:11:17 +0000 (0:00:02.181) 0:04:51.468 ******
2026-01-08 01:15:48.426107 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:15:48.426111 | orchestrator |
2026-01-08 01:15:48.426115 | orchestrator | TASK [nova-cell : Get container facts] *****************************************
2026-01-08 01:15:48.426124 | orchestrator | Thursday 08 January 2026 01:11:19 +0000 (0:00:01.887) 0:04:53.356 ******
2026-01-08 01:15:48.426129 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-08 01:15:48.426133 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-08 01:15:48.426138 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-08 01:15:48.426142 | orchestrator |
2026-01-08 01:15:48.426147 | orchestrator | TASK [nova-cell : Get current Libvirt version] *********************************
2026-01-08 01:15:48.426151 | orchestrator | Thursday 08 January 2026 01:11:29 +0000 (0:00:09.636) 0:05:02.993 ******
2026-01-08 01:15:48.426155 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-08 01:15:48.426160 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-08 01:15:48.426164 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-08 01:15:48.426169 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:15:48.426173 | orchestrator |
2026-01-08 01:15:48.426177 | orchestrator | TASK [nova-cell : Check that the new Libvirt version is >= current] ************
2026-01-08 01:15:48.426182 | orchestrator | Thursday 08 January 2026 01:11:34 +0000 (0:00:05.302) 0:05:08.295 ******
2026-01-08 01:15:48.426188 | orchestrator | skipping: [testbed-node-3] => (item={'result': False, 'changed': False, 'containers': {}, 'invocation': {'module_args': {'action': 'get_containers', 'container_engine': 'docker', 'name': ['nova_libvirt'], 'api_version': 'auto'}}, 'failed': False, 'item': 'testbed-node-3', 'ansible_loop_var': 'item'})
2026-01-08 01:15:48.426194 | orchestrator | skipping: [testbed-node-3] => (item={'result': False, 'changed': False, 'containers': {}, 'invocation': {'module_args': {'action': 'get_containers', 'container_engine': 'docker', 'name': ['nova_libvirt'], 'api_version': 'auto'}}, 'failed': False, 'item': 'testbed-node-4', 'ansible_loop_var': 'item'})
2026-01-08 01:15:48.426199 | orchestrator | skipping: [testbed-node-3] => (item={'result': False, 'changed': False, 'containers': {}, 'invocation': {'module_args': {'action': 'get_containers', 'container_engine': 'docker', 'name': ['nova_libvirt'], 'api_version': 'auto'}}, 'failed': False, 'item': 'testbed-node-5', 'ansible_loop_var': 'item'})
2026-01-08 01:15:48.426209 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:15:48.426214 | orchestrator |
2026-01-08 01:15:48.426218 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-01-08 01:15:48.426223 | orchestrator | Thursday 08 January 2026 01:11:38 +0000 (0:00:03.490) 0:05:11.786 ******
2026-01-08 01:15:48.426227 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:15:48.426231 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:48.426235 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:48.426240 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-08 01:15:48.426244 | orchestrator |
2026-01-08 01:15:48.426249 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-08 01:15:48.426256 | orchestrator | Thursday 08 January 2026 01:11:39 +0000 (0:00:01.085) 0:05:12.871 ******
2026-01-08 01:15:48.426260 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-01-08 01:15:48.426265 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-01-08 01:15:48.426269 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-01-08 01:15:48.426274 | orchestrator |
2026-01-08 01:15:48.426278 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-08 01:15:48.426282 | orchestrator | Thursday 08 January 2026 01:11:39 +0000 (0:00:00.675) 0:05:13.547 ******
2026-01-08 01:15:48.426287 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-01-08 01:15:48.426291 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-01-08 01:15:48.426295 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-01-08 01:15:48.426300 | orchestrator |
2026-01-08 01:15:48.426304 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-08 01:15:48.426309 | orchestrator | Thursday 08 January 2026 01:11:40 +0000 (0:00:01.138) 0:05:14.685 ******
2026-01-08 01:15:48.426314 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-01-08 01:15:48.426318 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:15:48.426323 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-01-08 01:15:48.426327 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:15:48.426331 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-01-08 01:15:48.426336 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:15:48.426340 | orchestrator |
2026-01-08 01:15:48.426344 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-01-08 01:15:48.426349 | orchestrator | Thursday 08 January 2026 01:11:41 +0000 (0:00:00.706) 0:05:15.392 ******
2026-01-08 01:15:48.426353 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-08 01:15:48.426357 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-08 01:15:48.426362 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:15:48.426366 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-08 01:15:48.426371 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-08 01:15:48.426375 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-08 01:15:48.426380 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:48.426384 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-08 01:15:48.426398 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-08 01:15:48.426403 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-08 01:15:48.426407 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-08 01:15:48.426411 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:48.426416 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-08 01:15:48.426420 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-08 01:15:48.426428 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-08 01:15:48.426432 | orchestrator |
2026-01-08 01:15:48.426437 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-01-08 01:15:48.426441 | orchestrator | Thursday 08 January 2026 01:11:43 +0000 (0:00:02.092) 0:05:17.485 ******
2026-01-08 01:15:48.426445 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:15:48.426450 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:48.426454 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:48.426458 | orchestrator | changed: [testbed-node-3]
2026-01-08 01:15:48.426463 | orchestrator | changed: [testbed-node-4]
2026-01-08 01:15:48.426467 | orchestrator | changed: [testbed-node-5]
2026-01-08 01:15:48.426471 | orchestrator |
2026-01-08 01:15:48.426476 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-01-08 01:15:48.426480 | orchestrator | Thursday 08 January 2026 01:11:44 +0000 (0:00:01.168) 0:05:18.654 ******
2026-01-08 01:15:48.426484 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:15:48.426488 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:15:48.426493 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:15:48.426497 | orchestrator | changed: [testbed-node-5]
2026-01-08 01:15:48.426501 | orchestrator | changed: [testbed-node-4]
2026-01-08 01:15:48.426506 | orchestrator | changed: [testbed-node-3]
2026-01-08 01:15:48.426510 | orchestrator |
2026-01-08 01:15:48.426514 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-01-08 01:15:48.426519 | orchestrator | Thursday 08 January 2026 01:11:46 +0000 (0:00:01.920) 0:05:20.574 ******
2026-01-08 01:15:48.426527 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-08 01:15:48.426533 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-08 01:15:48.426538 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-08 01:15:48.426551 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-08 01:15:48.426557 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-08 01:15:48.426562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-08 01:15:48.426570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-08 01:15:48.426574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-08 01:15:48.426578 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-08 01:15:48.426589 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-08 01:15:48.426593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-08 01:15:48.426597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-08 01:15:48.426601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-08 01:15:48.426608 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-08 01:15:48.426613 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-08 01:15:48.426625 | orchestrator |
2026-01-08 01:15:48.426629 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-01-08 01:15:48.426633 | orchestrator | Thursday 08 January 2026 01:11:49 +0000 (0:00:02.632) 0:05:23.207 ******
2026-01-08 01:15:48.426637 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-08 01:15:48.426641 | orchestrator |
2026-01-08 01:15:48.426645 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-01-08 01:15:48.426649 | orchestrator | Thursday 08 January 2026 01:11:50 +0000 (0:00:01.294) 0:05:24.502 ******
2026-01-08 01:15:48.426656 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-08 01:15:48.426660 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-08 01:15:48.426668 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-08 01:15:48.426673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-08 01:15:48.426680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-08 01:15:48.426689 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-08 01:15:48.426693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-08 01:15:48.426697 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-08 01:15:48.426701 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-08 01:15:48.426707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-08 01:15:48.426711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-08 01:15:48.426719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-08 01:15:48.426726 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-08 01:15:48.426730 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-08 01:15:48.426735 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-08 01:15:48.426739 | orchestrator |
2026-01-08 01:15:48.426743 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-01-08 01:15:48.426747 | orchestrator | Thursday 08 January 2026 01:11:53 +0000 (0:00:03.163) 0:05:27.665 ******
2026-01-08 01:15:48.426754 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-08 01:15:48.426762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-08 01:15:48.426770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-08 01:15:48.426775 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-08 01:15:48.426779 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:15:48.426783 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-08 01:15:48.426789 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-08 01:15:48.426797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes':
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.426801 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:15:48.426805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-08 01:15:48.426811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.426815 | orchestrator | 
skipping: [testbed-node-5] 2026-01-08 01:15:48.426819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-08 01:15:48.426823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-08 01:15:48.426830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.426837 | 
orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.426841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.426845 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.426849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-08 01:15:48.426857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.426861 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.426865 | orchestrator | 2026-01-08 01:15:48.426869 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-08 01:15:48.426873 | orchestrator | Thursday 08 January 2026 01:11:56 +0000 (0:00:02.244) 0:05:29.910 ****** 2026-01-08 01:15:48.426878 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-08 01:15:48.426882 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-08 01:15:48.426891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-08 01:15:48.426895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-08 01:15:48.426902 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.426907 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:15:48.426911 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-08 01:15:48.426915 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-08 01:15:48.426925 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.426949 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:15:48.426953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.426957 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:15:48.426961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-08 01:15:48.426968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-08 01:15:48.426972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.426976 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.426980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.426987 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.426995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-08 01:15:48.427000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.427004 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.427008 | orchestrator | 2026-01-08 01:15:48.427012 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-08 01:15:48.427016 | orchestrator | Thursday 08 January 2026 01:11:58 +0000 (0:00:02.296) 
0:05:32.206 ****** 2026-01-08 01:15:48.427019 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.427023 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.427027 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.427031 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-08 01:15:48.427035 | orchestrator | 2026-01-08 01:15:48.427039 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-01-08 01:15:48.427043 | orchestrator | Thursday 08 January 2026 01:11:59 +0000 (0:00:00.885) 0:05:33.091 ****** 2026-01-08 01:15:48.427047 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-08 01:15:48.427051 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-08 01:15:48.427054 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-08 01:15:48.427058 | orchestrator | 2026-01-08 01:15:48.427062 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-01-08 01:15:48.427066 | orchestrator | Thursday 08 January 2026 01:12:00 +0000 (0:00:01.228) 0:05:34.320 ****** 2026-01-08 01:15:48.427070 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-08 01:15:48.427074 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-08 01:15:48.427077 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-08 01:15:48.427081 | orchestrator | 2026-01-08 01:15:48.427085 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-01-08 01:15:48.427092 | orchestrator | Thursday 08 January 2026 01:12:01 +0000 (0:00:00.963) 0:05:35.283 ****** 2026-01-08 01:15:48.427096 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:15:48.427100 | orchestrator | ok: [testbed-node-4] 2026-01-08 01:15:48.427104 | orchestrator | ok: [testbed-node-5] 2026-01-08 01:15:48.427108 | orchestrator | 2026-01-08 01:15:48.427111 | orchestrator | TASK 
[nova-cell : Extract cinder key from file] ******************************** 2026-01-08 01:15:48.427115 | orchestrator | Thursday 08 January 2026 01:12:02 +0000 (0:00:00.557) 0:05:35.840 ****** 2026-01-08 01:15:48.427119 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:15:48.427123 | orchestrator | ok: [testbed-node-4] 2026-01-08 01:15:48.427127 | orchestrator | ok: [testbed-node-5] 2026-01-08 01:15:48.427134 | orchestrator | 2026-01-08 01:15:48.427138 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-01-08 01:15:48.427142 | orchestrator | Thursday 08 January 2026 01:12:02 +0000 (0:00:00.509) 0:05:36.350 ****** 2026-01-08 01:15:48.427145 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-01-08 01:15:48.427149 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-01-08 01:15:48.427153 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-01-08 01:15:48.427157 | orchestrator | 2026-01-08 01:15:48.427161 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-01-08 01:15:48.427165 | orchestrator | Thursday 08 January 2026 01:12:04 +0000 (0:00:01.352) 0:05:37.702 ****** 2026-01-08 01:15:48.427169 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-01-08 01:15:48.427173 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-01-08 01:15:48.427177 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-01-08 01:15:48.427180 | orchestrator | 2026-01-08 01:15:48.427184 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-01-08 01:15:48.427188 | orchestrator | Thursday 08 January 2026 01:12:05 +0000 (0:00:01.175) 0:05:38.877 ****** 2026-01-08 01:15:48.427192 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-01-08 01:15:48.427196 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 
2026-01-08 01:15:48.427200 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-01-08 01:15:48.427204 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-01-08 01:15:48.427208 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-01-08 01:15:48.427212 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-01-08 01:15:48.427216 | orchestrator | 2026-01-08 01:15:48.427219 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-01-08 01:15:48.427223 | orchestrator | Thursday 08 January 2026 01:12:08 +0000 (0:00:03.561) 0:05:42.438 ****** 2026-01-08 01:15:48.427227 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:15:48.427231 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:15:48.427235 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:15:48.427239 | orchestrator | 2026-01-08 01:15:48.427243 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-01-08 01:15:48.427247 | orchestrator | Thursday 08 January 2026 01:12:09 +0000 (0:00:00.307) 0:05:42.746 ****** 2026-01-08 01:15:48.427250 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:15:48.427254 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:15:48.427258 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:15:48.427262 | orchestrator | 2026-01-08 01:15:48.427266 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-01-08 01:15:48.427272 | orchestrator | Thursday 08 January 2026 01:12:09 +0000 (0:00:00.506) 0:05:43.253 ****** 2026-01-08 01:15:48.427276 | orchestrator | changed: [testbed-node-3] 2026-01-08 01:15:48.427280 | orchestrator | changed: [testbed-node-4] 2026-01-08 01:15:48.427284 | orchestrator | changed: [testbed-node-5] 2026-01-08 01:15:48.427287 | orchestrator | 2026-01-08 01:15:48.427291 | orchestrator | TASK [nova-cell : Pushing nova secret xml for 
libvirt] ************************* 2026-01-08 01:15:48.427295 | orchestrator | Thursday 08 January 2026 01:12:10 +0000 (0:00:01.218) 0:05:44.471 ****** 2026-01-08 01:15:48.427299 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-01-08 01:15:48.427305 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-01-08 01:15:48.427309 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-01-08 01:15:48.427318 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-01-08 01:15:48.427323 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-01-08 01:15:48.427327 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-01-08 01:15:48.427331 | orchestrator | 2026-01-08 01:15:48.427335 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-01-08 01:15:48.427338 | orchestrator | Thursday 08 January 2026 01:12:13 +0000 (0:00:03.140) 0:05:47.612 ****** 2026-01-08 01:15:48.427343 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-08 01:15:48.427346 | 
orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-08 01:15:48.427353 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-08 01:15:48.427357 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-08 01:15:48.427361 | orchestrator | changed: [testbed-node-3] 2026-01-08 01:15:48.427365 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-08 01:15:48.427368 | orchestrator | changed: [testbed-node-4] 2026-01-08 01:15:48.427372 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-08 01:15:48.427376 | orchestrator | changed: [testbed-node-5] 2026-01-08 01:15:48.427380 | orchestrator | 2026-01-08 01:15:48.427384 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-01-08 01:15:48.427388 | orchestrator | Thursday 08 January 2026 01:12:17 +0000 (0:00:03.204) 0:05:50.816 ****** 2026-01-08 01:15:48.427392 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:15:48.427395 | orchestrator | 2026-01-08 01:15:48.427399 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-01-08 01:15:48.427403 | orchestrator | Thursday 08 January 2026 01:12:17 +0000 (0:00:00.118) 0:05:50.934 ****** 2026-01-08 01:15:48.427407 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:15:48.427411 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:15:48.427415 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:15:48.427419 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.427423 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.427426 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.427430 | orchestrator | 2026-01-08 01:15:48.427434 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-01-08 01:15:48.427438 | orchestrator | Thursday 08 January 2026 01:12:18 +0000 (0:00:00.828) 0:05:51.762 ****** 2026-01-08 01:15:48.427442 | 
orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-08 01:15:48.427446 | orchestrator | 2026-01-08 01:15:48.427449 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-01-08 01:15:48.427453 | orchestrator | Thursday 08 January 2026 01:12:18 +0000 (0:00:00.728) 0:05:52.491 ****** 2026-01-08 01:15:48.427457 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:15:48.427461 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:15:48.427464 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:15:48.427468 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.427472 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.427476 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.427480 | orchestrator | 2026-01-08 01:15:48.427484 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-01-08 01:15:48.427488 | orchestrator | Thursday 08 January 2026 01:12:19 +0000 (0:00:00.617) 0:05:53.109 ****** 2026-01-08 01:15:48.427494 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-08 01:15:48.427503 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-08 01:15:48.427510 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-08 01:15:48.427515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-08 01:15:48.427519 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-08 01:15:48.427523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-08 01:15:48.427534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-08 01:15:48.427538 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-08 01:15:48.427542 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-08 01:15:48.427548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.427553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.427557 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.427564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.427571 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.427575 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.427579 | orchestrator | 2026-01-08 01:15:48.427583 | 
orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-01-08 01:15:48.427587 | orchestrator | Thursday 08 January 2026 01:12:22 +0000 (0:00:03.545) 0:05:56.654 ****** 2026-01-08 01:15:48.427594 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-08 01:15:48.427605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-08 01:15:48.427609 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-08 01:15:48.427619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-08 01:15:48.427624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 
'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-08 01:15:48.427681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-08 01:15:48.427688 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.427692 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.427700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-08 01:15:48.427707 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.427711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-08 01:15:48.427717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-08 01:15:48.427721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.427725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.427732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.427736 | orchestrator | 2026-01-08 01:15:48.427740 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-01-08 01:15:48.427744 | orchestrator | Thursday 08 January 2026 01:12:29 +0000 (0:00:06.889) 0:06:03.544 ****** 2026-01-08 01:15:48.427748 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:15:48.427752 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:15:48.427756 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:15:48.427764 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.427768 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.427772 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.427776 | orchestrator | 2026-01-08 01:15:48.427780 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-01-08 01:15:48.427784 | orchestrator 
| Thursday 08 January 2026 01:12:31 +0000 (0:00:01.822) 0:06:05.367 ****** 2026-01-08 01:15:48.427788 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-08 01:15:48.427791 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-08 01:15:48.427795 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-08 01:15:48.427799 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-08 01:15:48.427803 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-08 01:15:48.427807 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-08 01:15:48.427811 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-08 01:15:48.427816 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.427820 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-08 01:15:48.427824 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.427828 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-08 01:15:48.427831 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.427835 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-08 01:15:48.427839 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-08 01:15:48.427843 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-08 01:15:48.427847 | orchestrator | 2026-01-08 01:15:48.427851 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] 
******************************* 2026-01-08 01:15:48.427855 | orchestrator | Thursday 08 January 2026 01:12:35 +0000 (0:00:03.514) 0:06:08.881 ****** 2026-01-08 01:15:48.427859 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:15:48.427863 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:15:48.427867 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:15:48.427873 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.427880 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.427884 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.427888 | orchestrator | 2026-01-08 01:15:48.427892 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-01-08 01:15:48.427896 | orchestrator | Thursday 08 January 2026 01:12:36 +0000 (0:00:00.861) 0:06:09.742 ****** 2026-01-08 01:15:48.427899 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-08 01:15:48.427903 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-08 01:15:48.427907 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-08 01:15:48.427911 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-08 01:15:48.427915 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-08 01:15:48.427919 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-08 01:15:48.427923 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-08 01:15:48.427927 | orchestrator | skipping: 
[testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-08 01:15:48.427947 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-08 01:15:48.427951 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.427955 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-08 01:15:48.427959 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-08 01:15:48.427963 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.427966 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-08 01:15:48.427970 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.427974 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-08 01:15:48.427978 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-08 01:15:48.427983 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-08 01:15:48.427990 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-08 01:15:48.427994 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-08 01:15:48.427998 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-08 01:15:48.428002 | orchestrator | 2026-01-08 01:15:48.428005 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] 
********************************** 2026-01-08 01:15:48.428009 | orchestrator | Thursday 08 January 2026 01:12:41 +0000 (0:00:05.667) 0:06:15.410 ****** 2026-01-08 01:15:48.428013 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-08 01:15:48.428017 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-08 01:15:48.428021 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-08 01:15:48.428024 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-08 01:15:48.428028 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-08 01:15:48.428035 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-08 01:15:48.428039 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-08 01:15:48.428043 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-08 01:15:48.428047 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-08 01:15:48.428051 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-08 01:15:48.428055 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-08 01:15:48.428059 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-08 01:15:48.428063 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-08 01:15:48.428066 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.428070 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  
2026-01-08 01:15:48.428074 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.428081 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-08 01:15:48.428085 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-08 01:15:48.428089 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-08 01:15:48.428093 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.428097 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-08 01:15:48.428101 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-08 01:15:48.428104 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-08 01:15:48.428108 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-08 01:15:48.428112 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-08 01:15:48.428116 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-08 01:15:48.428120 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-08 01:15:48.428123 | orchestrator | 2026-01-08 01:15:48.428127 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-01-08 01:15:48.428131 | orchestrator | Thursday 08 January 2026 01:12:48 +0000 (0:00:06.612) 0:06:22.023 ****** 2026-01-08 01:15:48.428135 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:15:48.428139 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:15:48.428143 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:15:48.428147 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.428151 | orchestrator | skipping: 
[testbed-node-1] 2026-01-08 01:15:48.428154 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.428158 | orchestrator | 2026-01-08 01:15:48.428162 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-01-08 01:15:48.428166 | orchestrator | Thursday 08 January 2026 01:12:48 +0000 (0:00:00.601) 0:06:22.625 ****** 2026-01-08 01:15:48.428170 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:15:48.428173 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:15:48.428177 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:15:48.428181 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.428185 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.428189 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.428193 | orchestrator | 2026-01-08 01:15:48.428196 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-01-08 01:15:48.428200 | orchestrator | Thursday 08 January 2026 01:12:49 +0000 (0:00:00.874) 0:06:23.499 ****** 2026-01-08 01:15:48.428208 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.428212 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.428216 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.428220 | orchestrator | changed: [testbed-node-3] 2026-01-08 01:15:48.428223 | orchestrator | changed: [testbed-node-4] 2026-01-08 01:15:48.428227 | orchestrator | changed: [testbed-node-5] 2026-01-08 01:15:48.428231 | orchestrator | 2026-01-08 01:15:48.428235 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-01-08 01:15:48.428239 | orchestrator | Thursday 08 January 2026 01:12:51 +0000 (0:00:01.869) 0:06:25.369 ****** 2026-01-08 01:15:48.428246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-08 01:15:48.428250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-08 01:15:48.428257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-08 01:15:48.428262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-08 01:15:48.428266 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.428276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.428281 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:15:48.428284 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:15:48.428289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-08 01:15:48.428295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-08 01:15:48.428300 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.428304 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:15:48.428307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-08 01:15:48.428315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.428319 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.428326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-08 01:15:48.428330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.428334 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.428338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-08 01:15:48.428345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.428349 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.428353 | orchestrator | 2026-01-08 01:15:48.428356 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-01-08 01:15:48.428360 | orchestrator | Thursday 08 January 2026 01:12:53 +0000 (0:00:01.586) 0:06:26.955 ****** 2026-01-08 01:15:48.428364 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-01-08 01:15:48.428368 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-01-08 01:15:48.428376 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:15:48.428380 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-01-08 01:15:48.428384 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-01-08 01:15:48.428387 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:15:48.428391 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  
2026-01-08 01:15:48.428395 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-01-08 01:15:48.428399 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:15:48.428403 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-01-08 01:15:48.428407 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-01-08 01:15:48.428411 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.428415 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-01-08 01:15:48.428419 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-01-08 01:15:48.428423 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.428427 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-01-08 01:15:48.428430 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-01-08 01:15:48.428434 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.428438 | orchestrator | 2026-01-08 01:15:48.428442 | orchestrator | TASK [service-check-containers : nova_cell | Check containers] ***************** 2026-01-08 01:15:48.428446 | orchestrator | Thursday 08 January 2026 01:12:53 +0000 (0:00:00.645) 0:06:27.600 ****** 2026-01-08 01:15:48.428453 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 
67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-08 01:15:48.428457 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-08 01:15:48.428465 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-08 
01:15:48.428473 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-08 01:15:48.428477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-08 01:15:48.428481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-08 01:15:48.428488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 
'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-08 01:15:48.428492 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-08 01:15:48.428497 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-08 01:15:48.428504 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.428512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.428516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.428525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.428529 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.428533 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-08 01:15:48.428541 | orchestrator | 2026-01-08 01:15:48.428545 | orchestrator | TASK [service-check-containers : nova_cell | Notify handlers to restart containers] *** 2026-01-08 01:15:48.428551 | orchestrator | Thursday 08 January 2026 01:12:56 +0000 (0:00:02.884) 0:06:30.485 ****** 2026-01-08 01:15:48.428555 | orchestrator | changed: [testbed-node-3] => { 2026-01-08 01:15:48.428559 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:15:48.428563 | orchestrator | } 2026-01-08 01:15:48.428567 | orchestrator | changed: [testbed-node-4] => { 2026-01-08 01:15:48.428571 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:15:48.428575 | orchestrator | } 2026-01-08 01:15:48.428579 | orchestrator | changed: [testbed-node-5] => { 2026-01-08 01:15:48.428583 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:15:48.428587 | orchestrator | } 2026-01-08 01:15:48.428590 | orchestrator | changed: [testbed-node-0] => { 2026-01-08 01:15:48.428594 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:15:48.428598 | orchestrator | } 2026-01-08 01:15:48.428602 | orchestrator | changed: [testbed-node-1] => { 2026-01-08 01:15:48.428606 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:15:48.428610 | orchestrator | } 2026-01-08 01:15:48.428614 | orchestrator | changed: [testbed-node-2] => { 2026-01-08 01:15:48.428617 | orchestrator |  "msg": "Notifying handlers" 2026-01-08 01:15:48.428621 | orchestrator | } 2026-01-08 01:15:48.428625 | orchestrator | 2026-01-08 01:15:48.428629 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-08 01:15:48.428632 | orchestrator | Thursday 08 January 2026 01:12:57 +0000 (0:00:00.693) 0:06:31.179 ****** 2026-01-08 01:15:48.428636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 
'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-08 01:15:48.428641 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-08 01:15:48.428648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.428653 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:15:48.428657 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-08 01:15:48.428667 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-08 01:15:48.428671 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.428675 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:15:48.428679 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-08 01:15:48.428686 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-08 01:15:48.428690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.428698 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:15:48.428702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-08 01:15:48.428709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.428713 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.428716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-08 01:15:48.428721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.428724 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.428731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2025.1', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-08 01:15:48.428735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2025.1', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-08 01:15:48.428743 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.428747 | orchestrator | 2026-01-08 01:15:48.428751 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-08 01:15:48.428755 | orchestrator | Thursday 08 January 2026 01:12:59 +0000 (0:00:02.061) 0:06:33.241 ****** 2026-01-08 01:15:48.428759 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:15:48.428762 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:15:48.428766 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:15:48.428770 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.428774 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.428777 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.428781 | orchestrator | 2026-01-08 01:15:48.428785 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-01-08 01:15:48.428789 | orchestrator | Thursday 08 January 2026 01:13:00 +0000 (0:00:00.820) 0:06:34.061 ****** 2026-01-08 01:15:48.428793 | orchestrator | 2026-01-08 01:15:48.428797 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-08 01:15:48.428801 | orchestrator | Thursday 08 January 2026 01:13:00 +0000 (0:00:00.153) 0:06:34.215 ****** 2026-01-08 01:15:48.428805 | orchestrator | 2026-01-08 01:15:48.428809 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-08 01:15:48.428813 | orchestrator | Thursday 08 January 2026 01:13:00 +0000 (0:00:00.152) 0:06:34.367 ****** 2026-01-08 01:15:48.428816 | orchestrator | 2026-01-08 01:15:48.428822 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-08 01:15:48.428826 | orchestrator | Thursday 08 January 2026 01:13:00 +0000 (0:00:00.131) 0:06:34.499 ****** 2026-01-08 01:15:48.428830 | orchestrator | 2026-01-08 01:15:48.428834 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-08 01:15:48.428838 | orchestrator | Thursday 08 January 2026 01:13:00 +0000 (0:00:00.137) 0:06:34.636 ****** 2026-01-08 01:15:48.428842 | orchestrator | 2026-01-08 01:15:48.428845 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-08 01:15:48.428849 | orchestrator | Thursday 08 January 2026 01:13:01 +0000 (0:00:00.309) 0:06:34.946 ****** 2026-01-08 01:15:48.428853 | orchestrator | 2026-01-08 01:15:48.428857 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-01-08 01:15:48.428861 | orchestrator | Thursday 08 January 2026 01:13:01 +0000 (0:00:00.133) 0:06:35.079 ****** 2026-01-08 01:15:48.428865 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:15:48.428869 | orchestrator | 
changed: [testbed-node-1] 2026-01-08 01:15:48.428873 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:15:48.428876 | orchestrator | 2026-01-08 01:15:48.428880 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-01-08 01:15:48.428884 | orchestrator | Thursday 08 January 2026 01:13:12 +0000 (0:00:11.505) 0:06:46.585 ****** 2026-01-08 01:15:48.428888 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:15:48.428892 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:15:48.428896 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:15:48.428899 | orchestrator | 2026-01-08 01:15:48.428903 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-01-08 01:15:48.428907 | orchestrator | Thursday 08 January 2026 01:13:24 +0000 (0:00:11.861) 0:06:58.446 ****** 2026-01-08 01:15:48.428911 | orchestrator | changed: [testbed-node-3] 2026-01-08 01:15:48.428915 | orchestrator | changed: [testbed-node-5] 2026-01-08 01:15:48.428919 | orchestrator | changed: [testbed-node-4] 2026-01-08 01:15:48.428923 | orchestrator | 2026-01-08 01:15:48.428927 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-01-08 01:15:48.428965 | orchestrator | Thursday 08 January 2026 01:13:41 +0000 (0:00:16.832) 0:07:15.279 ****** 2026-01-08 01:15:48.428969 | orchestrator | changed: [testbed-node-3] 2026-01-08 01:15:48.428973 | orchestrator | changed: [testbed-node-4] 2026-01-08 01:15:48.428977 | orchestrator | changed: [testbed-node-5] 2026-01-08 01:15:48.428981 | orchestrator | 2026-01-08 01:15:48.428985 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-01-08 01:15:48.428989 | orchestrator | Thursday 08 January 2026 01:14:08 +0000 (0:00:27.323) 0:07:42.602 ****** 2026-01-08 01:15:48.428992 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries 
left). 2026-01-08 01:15:48.428996 | orchestrator | changed: [testbed-node-3] 2026-01-08 01:15:48.429000 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2026-01-08 01:15:48.429004 | orchestrator | changed: [testbed-node-4] 2026-01-08 01:15:48.429008 | orchestrator | changed: [testbed-node-5] 2026-01-08 01:15:48.429012 | orchestrator | 2026-01-08 01:15:48.429016 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-01-08 01:15:48.429020 | orchestrator | Thursday 08 January 2026 01:14:15 +0000 (0:00:06.362) 0:07:48.965 ****** 2026-01-08 01:15:48.429024 | orchestrator | changed: [testbed-node-3] 2026-01-08 01:15:48.429028 | orchestrator | changed: [testbed-node-4] 2026-01-08 01:15:48.429031 | orchestrator | changed: [testbed-node-5] 2026-01-08 01:15:48.429035 | orchestrator | 2026-01-08 01:15:48.429042 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-01-08 01:15:48.429046 | orchestrator | Thursday 08 January 2026 01:14:16 +0000 (0:00:00.907) 0:07:49.873 ****** 2026-01-08 01:15:48.429050 | orchestrator | changed: [testbed-node-3] 2026-01-08 01:15:48.429054 | orchestrator | changed: [testbed-node-4] 2026-01-08 01:15:48.429058 | orchestrator | changed: [testbed-node-5] 2026-01-08 01:15:48.429062 | orchestrator | 2026-01-08 01:15:48.429066 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-01-08 01:15:48.429070 | orchestrator | Thursday 08 January 2026 01:14:35 +0000 (0:00:19.429) 0:08:09.302 ****** 2026-01-08 01:15:48.429074 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:15:48.429077 | orchestrator | 2026-01-08 01:15:48.429081 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-01-08 01:15:48.429085 | orchestrator | Thursday 08 January 2026 01:14:35 +0000 (0:00:00.303) 0:08:09.606 ****** 
2026-01-08 01:15:48.429089 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:15:48.429093 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.429097 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:15:48.429101 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.429104 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.429108 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-01-08 01:15:48.429112 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-01-08 01:15:48.429116 | orchestrator | 2026-01-08 01:15:48.429120 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-01-08 01:15:48.429124 | orchestrator | Thursday 08 January 2026 01:14:57 +0000 (0:00:21.454) 0:08:31.060 ****** 2026-01-08 01:15:48.429128 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.429132 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:15:48.429136 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:15:48.429139 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.429143 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.429147 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:15:48.429151 | orchestrator | 2026-01-08 01:15:48.429155 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-01-08 01:15:48.429159 | orchestrator | Thursday 08 January 2026 01:15:06 +0000 (0:00:09.063) 0:08:40.124 ****** 2026-01-08 01:15:48.429162 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:15:48.429169 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:15:48.429173 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.429177 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.429181 | orchestrator | skipping: [testbed-node-1] 2026-01-08 
01:15:48.429188 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-01-08 01:15:48.429192 | orchestrator | 2026-01-08 01:15:48.429196 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-01-08 01:15:48.429200 | orchestrator | Thursday 08 January 2026 01:15:11 +0000 (0:00:04.890) 0:08:45.015 ****** 2026-01-08 01:15:48.429204 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-01-08 01:15:48.429208 | orchestrator | 2026-01-08 01:15:48.429211 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-01-08 01:15:48.429215 | orchestrator | Thursday 08 January 2026 01:15:25 +0000 (0:00:13.855) 0:08:58.870 ****** 2026-01-08 01:15:48.429219 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-01-08 01:15:48.429223 | orchestrator | 2026-01-08 01:15:48.429227 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-01-08 01:15:48.429231 | orchestrator | Thursday 08 January 2026 01:15:26 +0000 (0:00:01.351) 0:09:00.222 ****** 2026-01-08 01:15:48.429234 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:15:48.429238 | orchestrator | 2026-01-08 01:15:48.429242 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-01-08 01:15:48.429246 | orchestrator | Thursday 08 January 2026 01:15:28 +0000 (0:00:01.571) 0:09:01.793 ****** 2026-01-08 01:15:48.429250 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-01-08 01:15:48.429254 | orchestrator | 2026-01-08 01:15:48.429258 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-01-08 01:15:48.429261 | orchestrator | 2026-01-08 01:15:48.429265 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-01-08 01:15:48.429269 | 
orchestrator | Thursday 08 January 2026 01:15:39 +0000 (0:00:11.543) 0:09:13.336 ****** 2026-01-08 01:15:48.429273 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:15:48.429277 | orchestrator | changed: [testbed-node-1] 2026-01-08 01:15:48.429281 | orchestrator | changed: [testbed-node-2] 2026-01-08 01:15:48.429285 | orchestrator | 2026-01-08 01:15:48.429288 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-01-08 01:15:48.429292 | orchestrator | 2026-01-08 01:15:48.429296 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-01-08 01:15:48.429300 | orchestrator | Thursday 08 January 2026 01:15:40 +0000 (0:00:00.918) 0:09:14.255 ****** 2026-01-08 01:15:48.429304 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.429308 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.429312 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.429316 | orchestrator | 2026-01-08 01:15:48.429319 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-01-08 01:15:48.429323 | orchestrator | 2026-01-08 01:15:48.429327 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-01-08 01:15:48.429331 | orchestrator | Thursday 08 January 2026 01:15:41 +0000 (0:00:00.725) 0:09:14.981 ****** 2026-01-08 01:15:48.429335 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-01-08 01:15:48.429339 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-01-08 01:15:48.429343 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-01-08 01:15:48.429347 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-01-08 01:15:48.429351 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-01-08 01:15:48.429357 | orchestrator | skipping: [testbed-node-3] => 
(item=nova-spicehtml5proxy)  2026-01-08 01:15:48.429362 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:15:48.429365 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-01-08 01:15:48.429369 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-01-08 01:15:48.429377 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-01-08 01:15:48.429381 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-01-08 01:15:48.429385 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-01-08 01:15:48.429389 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-01-08 01:15:48.429393 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:15:48.429397 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-01-08 01:15:48.429401 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-01-08 01:15:48.429405 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-01-08 01:15:48.429409 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-01-08 01:15:48.429412 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-01-08 01:15:48.429416 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-01-08 01:15:48.429420 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:15:48.429424 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-01-08 01:15:48.429428 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-01-08 01:15:48.429432 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-01-08 01:15:48.429435 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-01-08 01:15:48.429439 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-01-08 01:15:48.429443 | orchestrator | skipping: [testbed-node-0] => 
(item=nova-spicehtml5proxy)  2026-01-08 01:15:48.429447 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.429451 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-01-08 01:15:48.429455 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-01-08 01:15:48.429458 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-01-08 01:15:48.429462 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-01-08 01:15:48.429466 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-01-08 01:15:48.429470 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-01-08 01:15:48.429474 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.429480 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-01-08 01:15:48.429484 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-01-08 01:15:48.429489 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-01-08 01:15:48.429493 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-01-08 01:15:48.429497 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-01-08 01:15:48.429510 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-01-08 01:15:48.429516 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.429523 | orchestrator | 2026-01-08 01:15:48.429528 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-01-08 01:15:48.429536 | orchestrator | 2026-01-08 01:15:48.429542 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-01-08 01:15:48.429547 | orchestrator | Thursday 08 January 2026 01:15:42 +0000 (0:00:01.301) 0:09:16.282 ****** 2026-01-08 01:15:48.429552 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-01-08 
01:15:48.429558 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-01-08 01:15:48.429564 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.429570 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-01-08 01:15:48.429575 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-01-08 01:15:48.429581 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.429586 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-01-08 01:15:48.429592 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-01-08 01:15:48.429605 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.429610 | orchestrator | 2026-01-08 01:15:48.429615 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-01-08 01:15:48.429621 | orchestrator | 2026-01-08 01:15:48.429626 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-01-08 01:15:48.429632 | orchestrator | Thursday 08 January 2026 01:15:43 +0000 (0:00:00.551) 0:09:16.834 ****** 2026-01-08 01:15:48.429637 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.429643 | orchestrator | 2026-01-08 01:15:48.429649 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-01-08 01:15:48.429655 | orchestrator | 2026-01-08 01:15:48.429661 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-01-08 01:15:48.429667 | orchestrator | Thursday 08 January 2026 01:15:44 +0000 (0:00:01.306) 0:09:18.140 ****** 2026-01-08 01:15:48.429672 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:15:48.429676 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:15:48.429680 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:15:48.429684 | orchestrator | 2026-01-08 01:15:48.429688 | orchestrator | PLAY RECAP 
*********************************************************************
2026-01-08 01:15:48.429692 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 01:15:48.429697 | orchestrator | testbed-node-0 : ok=59  changed=39  unreachable=0 failed=0 skipped=48  rescued=0 ignored=0
2026-01-08 01:15:48.429706 | orchestrator | testbed-node-1 : ok=32  changed=23  unreachable=0 failed=0 skipped=55  rescued=0 ignored=0
2026-01-08 01:15:48.429710 | orchestrator | testbed-node-2 : ok=32  changed=23  unreachable=0 failed=0 skipped=55  rescued=0 ignored=0
2026-01-08 01:15:48.429713 | orchestrator | testbed-node-3 : ok=44  changed=29  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0
2026-01-08 01:15:48.429717 | orchestrator | testbed-node-4 : ok=42  changed=28  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-01-08 01:15:48.429721 | orchestrator | testbed-node-5 : ok=37  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-01-08 01:15:48.429725 | orchestrator |
2026-01-08 01:15:48.429729 | orchestrator |
2026-01-08 01:15:48.429733 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 01:15:48.429737 | orchestrator | Thursday 08 January 2026 01:15:44 +0000 (0:00:00.437) 0:09:18.577 ******
2026-01-08 01:15:48.429741 | orchestrator | ===============================================================================
2026-01-08 01:15:48.429745 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 28.56s
2026-01-08 01:15:48.429748 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 27.32s
2026-01-08 01:15:48.429752 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.58s
2026-01-08 01:15:48.429756 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.45s
2026-01-08 01:15:48.429760 | orchestrator | nova-cell : Get new Libvirt version ------------------------------------ 20.77s
2026-01-08 01:15:48.429764 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 19.43s
2026-01-08 01:15:48.429768 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.45s
2026-01-08 01:15:48.429771 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 16.83s
2026-01-08 01:15:48.429775 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.29s
2026-01-08 01:15:48.429779 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 13.86s
2026-01-08 01:15:48.429792 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.86s
2026-01-08 01:15:48.429796 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.81s
2026-01-08 01:15:48.429800 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.18s
2026-01-08 01:15:48.429804 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.86s
2026-01-08 01:15:48.429808 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.54s
2026-01-08 01:15:48.429812 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.51s
2026-01-08 01:15:48.429816 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.09s
2026-01-08 01:15:48.429820 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------ 11.05s
2026-01-08 01:15:48.429824 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.75s
2026-01-08 01:15:48.429827 | orchestrator | nova-cell : Get container facts ----------------------------------------- 9.64s
2026-01-08 01:15:48.429831 | orchestrator |
2026-01-08 01:15:48 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-01-08 01:16:49.304869 | orchestrator |
2026-01-08 01:16:49.786481 | orchestrator |
2026-01-08 01:16:49.793791 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Thu Jan 8 01:16:49 UTC 2026
2026-01-08 01:16:49.793857 | orchestrator |
2026-01-08 01:16:50.226426 | orchestrator | ok: Runtime: 0:35:44.581756
2026-01-08 01:16:50.525578 |
2026-01-08 01:16:50.525941 | TASK [Bootstrap services]
2026-01-08 01:16:51.350399 | orchestrator |
2026-01-08 01:16:51.350532 | orchestrator | # BOOTSTRAP
2026-01-08 01:16:51.350542 | orchestrator |
2026-01-08 01:16:51.350547 | orchestrator | + set -e
2026-01-08 01:16:51.350552 | orchestrator | + echo
2026-01-08 01:16:51.350558 | orchestrator | + echo '# BOOTSTRAP'
2026-01-08 01:16:51.350566 | orchestrator | + echo
2026-01-08 01:16:51.350587 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-01-08 01:16:51.360151 | orchestrator | + set -e
2026-01-08 01:16:51.360243 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-01-08 01:16:56.213408 | orchestrator | 2026-01-08 01:16:56 | INFO  | It takes a moment until task 81e57540-b99e-4b34-8ddf-6b3b39849a88 (flavor-manager) has been started and output is visible here.
2026-01-08 01:17:03.703114 | orchestrator | 2026-01-08 01:16:58 | INFO  | Flavor SCS-1L-1 created
2026-01-08 01:17:03.703203 | orchestrator | 2026-01-08 01:16:59 | INFO  | Flavor SCS-1L-1-5 created
2026-01-08 01:17:03.703210 | orchestrator | 2026-01-08 01:16:59 | INFO  | Flavor SCS-1V-2 created
2026-01-08 01:17:03.703215 | orchestrator | 2026-01-08 01:16:59 | INFO  | Flavor SCS-1V-2-5 created
2026-01-08 01:17:03.703219 | orchestrator | 2026-01-08 01:16:59 | INFO  | Flavor SCS-1V-4 created
2026-01-08 01:17:03.703223 | orchestrator | 2026-01-08 01:16:59 | INFO  | Flavor SCS-1V-4-10 created
2026-01-08 01:17:03.703228 | orchestrator | 2026-01-08 01:17:00 | INFO  | Flavor SCS-1V-8 created
2026-01-08 01:17:03.703232 | orchestrator | 2026-01-08 01:17:00 | INFO  | Flavor SCS-1V-8-20 created
2026-01-08 01:17:03.703244 | orchestrator | 2026-01-08 01:17:00 | INFO  | Flavor SCS-2V-4 created
2026-01-08 01:17:03.703248 | orchestrator | 2026-01-08 01:17:00 | INFO  | Flavor SCS-2V-4-10 created
2026-01-08 01:17:03.703258 | orchestrator | 2026-01-08 01:17:00 | INFO  | Flavor SCS-2V-8 created
2026-01-08 01:17:03.703263 | orchestrator | 2026-01-08 01:17:00 | INFO  | Flavor SCS-2V-8-20 created
2026-01-08 01:17:03.703267 | orchestrator | 2026-01-08 01:17:01 | INFO  | Flavor SCS-2V-16 created
2026-01-08 01:17:03.703271 | orchestrator | 2026-01-08 01:17:01 | INFO  | Flavor SCS-2V-16-50 created
2026-01-08 01:17:03.703275 | orchestrator | 2026-01-08 01:17:01 | INFO  | Flavor SCS-4V-8 created
2026-01-08 01:17:03.703279 | orchestrator | 2026-01-08 01:17:01 | INFO  | Flavor SCS-4V-8-20 created
2026-01-08 01:17:03.703283 | orchestrator | 2026-01-08 01:17:01 | INFO  | Flavor SCS-4V-16 created
2026-01-08 01:17:03.703286 | orchestrator | 2026-01-08 01:17:01 | INFO  | Flavor SCS-4V-16-50 created
2026-01-08 01:17:03.703291 | orchestrator | 2026-01-08 01:17:02 | INFO  | Flavor SCS-4V-32 created
2026-01-08 01:17:03.703295 | orchestrator | 2026-01-08 01:17:02 | INFO  | Flavor SCS-4V-32-100 created
2026-01-08 01:17:03.703300 | orchestrator | 2026-01-08 01:17:02 | INFO  | Flavor SCS-8V-16 created
2026-01-08 01:17:03.703304 | orchestrator | 2026-01-08 01:17:02 | INFO  | Flavor SCS-8V-16-50 created
2026-01-08 01:17:03.703308 | orchestrator | 2026-01-08 01:17:02 | INFO  | Flavor SCS-8V-32 created
2026-01-08 01:17:03.703312 | orchestrator | 2026-01-08 01:17:02 | INFO  | Flavor SCS-8V-32-100 created
2026-01-08 01:17:03.703316 | orchestrator | 2026-01-08 01:17:02 | INFO  | Flavor SCS-16V-32 created
2026-01-08 01:17:03.703320 | orchestrator | 2026-01-08 01:17:03 | INFO  | Flavor SCS-16V-32-100 created
2026-01-08 01:17:03.703324 | orchestrator | 2026-01-08 01:17:03 | INFO  | Flavor SCS-2V-4-20s created
2026-01-08 01:17:03.703328 | orchestrator | 2026-01-08 01:17:03 | INFO  | Flavor SCS-4V-8-50s created
2026-01-08 01:17:03.703332 | orchestrator | 2026-01-08 01:17:03 | INFO  | Flavor SCS-8V-32-100s created
2026-01-08 01:17:06.051053 | orchestrator | 2026-01-08 01:17:06 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-01-08 01:17:16.166803 | orchestrator | 2026-01-08 01:17:16 | INFO  | Task f6e72a9d-4bfa-4fa6-bd19-81ddc1e03487 (bootstrap-basic) was prepared for execution.
2026-01-08 01:17:16.166851 | orchestrator | 2026-01-08 01:17:16 | INFO  | It takes a moment until task f6e72a9d-4bfa-4fa6-bd19-81ddc1e03487 (bootstrap-basic) has been started and output is visible here.
2026-01-08 01:18:03.954230 | orchestrator |
2026-01-08 01:18:03.954313 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-01-08 01:18:03.954320 | orchestrator |
2026-01-08 01:18:03.954325 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-08 01:18:03.954330 | orchestrator | Thursday 08 January 2026 01:17:20 +0000 (0:00:00.077) 0:00:00.077 ******
2026-01-08 01:18:03.954334 | orchestrator | ok: [localhost]
2026-01-08 01:18:03.954339 | orchestrator |
2026-01-08 01:18:03.954343 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-01-08 01:18:03.954347 | orchestrator | Thursday 08 January 2026 01:17:22 +0000 (0:00:01.986) 0:00:02.064 ******
2026-01-08 01:18:03.954351 | orchestrator | ok: [localhost]
2026-01-08 01:18:03.954355 | orchestrator |
2026-01-08 01:18:03.954359 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-01-08 01:18:03.954363 | orchestrator | Thursday 08 January 2026 01:17:31 +0000 (0:00:09.040) 0:00:11.104 ******
2026-01-08 01:18:03.954368 | orchestrator | changed: [localhost]
2026-01-08 01:18:03.954372 | orchestrator |
2026-01-08 01:18:03.954376 | orchestrator | TASK [Create public network] ***************************************************
2026-01-08 01:18:03.954380 | orchestrator | Thursday 08 January 2026 01:17:39 +0000 (0:00:08.205) 0:00:19.310 ******
2026-01-08 01:18:03.954384 | orchestrator | changed: [localhost]
2026-01-08 01:18:03.954388 | orchestrator |
2026-01-08 01:18:03.954392 | orchestrator | TASK [Set public network to default] *******************************************
2026-01-08 01:18:03.954396 | orchestrator | Thursday 08 January 2026 01:17:45 +0000 (0:00:05.356) 0:00:24.666 ******
2026-01-08 01:18:03.954403 | orchestrator | changed: [localhost]
2026-01-08 01:18:03.954407 | orchestrator |
2026-01-08 01:18:03.954411 | orchestrator | TASK [Create public subnet] ****************************************************
2026-01-08 01:18:03.954415 | orchestrator | Thursday 08 January 2026 01:17:51 +0000 (0:00:06.576) 0:00:31.243 ******
2026-01-08 01:18:03.954419 | orchestrator | changed: [localhost]
2026-01-08 01:18:03.954423 | orchestrator |
2026-01-08 01:18:03.954427 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-01-08 01:18:03.954431 | orchestrator | Thursday 08 January 2026 01:17:56 +0000 (0:00:04.339) 0:00:35.582 ******
2026-01-08 01:18:03.954435 | orchestrator | changed: [localhost]
2026-01-08 01:18:03.954438 | orchestrator |
2026-01-08 01:18:03.954443 | orchestrator | TASK [Create manager role] *****************************************************
2026-01-08 01:18:03.954451 | orchestrator | Thursday 08 January 2026 01:18:00 +0000 (0:00:03.889) 0:00:39.471 ******
2026-01-08 01:18:03.954455 | orchestrator | ok: [localhost]
2026-01-08 01:18:03.954459 | orchestrator |
2026-01-08 01:18:03.954463 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 01:18:03.954467 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-08 01:18:03.954473 | orchestrator |
2026-01-08 01:18:03.954479 | orchestrator |
2026-01-08 01:18:03.954485 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 01:18:03.954491 | orchestrator | Thursday 08 January 2026 01:18:03 +0000 (0:00:03.643) 0:00:43.115 ******
2026-01-08 01:18:03.954496 | orchestrator | ===============================================================================
2026-01-08 01:18:03.954502 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.04s
2026-01-08 01:18:03.954508 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.21s
2026-01-08 01:18:03.954514 | orchestrator | Set public network to default ------------------------------------------- 6.58s
2026-01-08 01:18:03.954520 | orchestrator | Create public network --------------------------------------------------- 5.36s
2026-01-08 01:18:03.954543 | orchestrator | Create public subnet ---------------------------------------------------- 4.34s
2026-01-08 01:18:03.954549 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.89s
2026-01-08 01:18:03.954556 | orchestrator | Create manager role ----------------------------------------------------- 3.64s
2026-01-08 01:18:03.954562 | orchestrator | Gathering Facts --------------------------------------------------------- 1.99s
2026-01-08 01:18:06.452705 | orchestrator | 2026-01-08 01:18:06 | INFO  | It takes a moment until task 9ef5de02-3213-4298-9b26-7a73982150f9 (image-manager) has been started and output is visible here.
2026-01-08 01:18:46.479106 | orchestrator | 2026-01-08 01:18:09 | INFO  | Processing image 'Cirros 0.6.2'
2026-01-08 01:18:46.479163 | orchestrator | 2026-01-08 01:18:09 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-01-08 01:18:46.479171 | orchestrator | 2026-01-08 01:18:09 | INFO  | Importing image Cirros 0.6.2
2026-01-08 01:18:46.479177 | orchestrator | 2026-01-08 01:18:09 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-01-08 01:18:46.479183 | orchestrator | 2026-01-08 01:18:12 | INFO  | Waiting for image to leave queued state...
2026-01-08 01:18:46.479190 | orchestrator | 2026-01-08 01:18:14 | INFO  | Waiting for import to complete...
2026-01-08 01:18:46.479195 | orchestrator | 2026-01-08 01:18:24 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-01-08 01:18:46.479201 | orchestrator | 2026-01-08 01:18:25 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-01-08 01:18:46.479206 | orchestrator | 2026-01-08 01:18:25 | INFO  | Setting internal_version = 0.6.2
2026-01-08 01:18:46.479211 | orchestrator | 2026-01-08 01:18:25 | INFO  | Setting image_original_user = cirros
2026-01-08 01:18:46.479217 | orchestrator | 2026-01-08 01:18:25 | INFO  | Adding tag os:cirros
2026-01-08 01:18:46.479223 | orchestrator | 2026-01-08 01:18:25 | INFO  | Setting property architecture: x86_64
2026-01-08 01:18:46.479228 | orchestrator | 2026-01-08 01:18:25 | INFO  | Setting property hw_disk_bus: scsi
2026-01-08 01:18:46.479234 | orchestrator | 2026-01-08 01:18:25 | INFO  | Setting property hw_rng_model: virtio
2026-01-08 01:18:46.479240 | orchestrator | 2026-01-08 01:18:26 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-01-08 01:18:46.479245 | orchestrator | 2026-01-08 01:18:26 | INFO  | Setting property hw_watchdog_action: reset
2026-01-08 01:18:46.479250 | orchestrator | 2026-01-08 01:18:26 | INFO  | Setting property hypervisor_type: qemu
2026-01-08 01:18:46.479254 | orchestrator | 2026-01-08 01:18:26 | INFO  | Setting property os_distro: cirros
2026-01-08 01:18:46.479257 | orchestrator | 2026-01-08 01:18:26 | INFO  | Setting property os_purpose: minimal
2026-01-08 01:18:46.479260 | orchestrator | 2026-01-08 01:18:26 | INFO  | Setting property replace_frequency: never
2026-01-08 01:18:46.479263 | orchestrator | 2026-01-08 01:18:27 | INFO  | Setting property uuid_validity: none
2026-01-08 01:18:46.479266 | orchestrator | 2026-01-08 01:18:27 | INFO  | Setting property provided_until: none
2026-01-08 01:18:46.479269 | orchestrator | 2026-01-08 01:18:27 | INFO  | Setting property image_description: Cirros
2026-01-08 01:18:46.479272 | orchestrator | 2026-01-08 01:18:27 | INFO  | Setting property image_name: Cirros
2026-01-08 01:18:46.479275 | orchestrator | 2026-01-08 01:18:27 | INFO  | Setting property internal_version: 0.6.2
2026-01-08 01:18:46.479278 | orchestrator | 2026-01-08 01:18:27 | INFO  | Setting property image_original_user: cirros
2026-01-08 01:18:46.479294 | orchestrator | 2026-01-08 01:18:27 | INFO  | Setting property os_version: 0.6.2
2026-01-08 01:18:46.479300 | orchestrator | 2026-01-08 01:18:28 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-01-08 01:18:46.479304 | orchestrator | 2026-01-08 01:18:28 | INFO  | Setting property image_build_date: 2023-05-30
2026-01-08 01:18:46.479307 | orchestrator | 2026-01-08 01:18:28 | INFO  | Checking status of 'Cirros 0.6.2'
2026-01-08 01:18:46.479310 | orchestrator | 2026-01-08 01:18:28 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-01-08 01:18:46.479313 | orchestrator | 2026-01-08 01:18:28 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-01-08 01:18:46.479318 | orchestrator | 2026-01-08 01:18:28 | INFO  | Processing image 'Cirros 0.6.3'
2026-01-08 01:18:46.479327 | orchestrator | 2026-01-08 01:18:28 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-01-08 01:18:46.479335 | orchestrator | 2026-01-08 01:18:28 | INFO  | Importing image Cirros 0.6.3
2026-01-08 01:18:46.479340 | orchestrator | 2026-01-08 01:18:28 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-01-08 01:18:46.479346 | orchestrator | 2026-01-08 01:18:30 | INFO  | Waiting for image to leave queued state...
2026-01-08 01:18:46.479350 | orchestrator | 2026-01-08 01:18:32 | INFO  | Waiting for import to complete...
2026-01-08 01:18:46.479365 | orchestrator | 2026-01-08 01:18:42 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-01-08 01:18:46.479370 | orchestrator | 2026-01-08 01:18:42 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-01-08 01:18:46.479375 | orchestrator | 2026-01-08 01:18:42 | INFO  | Setting internal_version = 0.6.3
2026-01-08 01:18:46.479380 | orchestrator | 2026-01-08 01:18:42 | INFO  | Setting image_original_user = cirros
2026-01-08 01:18:46.479385 | orchestrator | 2026-01-08 01:18:42 | INFO  | Adding tag os:cirros
2026-01-08 01:18:46.479390 | orchestrator | 2026-01-08 01:18:42 | INFO  | Setting property architecture: x86_64
2026-01-08 01:18:46.479395 | orchestrator | 2026-01-08 01:18:42 | INFO  | Setting property hw_disk_bus: scsi
2026-01-08 01:18:46.479406 | orchestrator | 2026-01-08 01:18:42 | INFO  | Setting property hw_rng_model: virtio
2026-01-08 01:18:46.479412 | orchestrator | 2026-01-08 01:18:43 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-01-08 01:18:46.479418 | orchestrator | 2026-01-08 01:18:43 | INFO  | Setting property hw_watchdog_action: reset
2026-01-08 01:18:46.479424 | orchestrator | 2026-01-08 01:18:43 | INFO  | Setting property hypervisor_type: qemu
2026-01-08 01:18:46.479430 | orchestrator | 2026-01-08 01:18:43 | INFO  | Setting property os_distro: cirros
2026-01-08 01:18:46.479435 | orchestrator | 2026-01-08 01:18:44 | INFO  | Setting property os_purpose: minimal
2026-01-08 01:18:46.479441 | orchestrator | 2026-01-08 01:18:44 | INFO  | Setting property replace_frequency: never
2026-01-08 01:18:46.479447 | orchestrator | 2026-01-08 01:18:44 | INFO  | Setting property uuid_validity: none
2026-01-08 01:18:46.479453 | orchestrator | 2026-01-08 01:18:44 | INFO  | Setting property provided_until: none
2026-01-08 01:18:46.479458 | orchestrator | 2026-01-08 01:18:44 | INFO  | Setting property image_description: Cirros
2026-01-08 01:18:46.479463 | orchestrator | 2026-01-08 01:18:44 | INFO  | Setting property image_name: Cirros
2026-01-08 01:18:46.479468 | orchestrator | 2026-01-08 01:18:45 | INFO  | Setting property internal_version: 0.6.3
2026-01-08 01:18:46.479479 | orchestrator | 2026-01-08 01:18:45 | INFO  | Setting property image_original_user: cirros
2026-01-08 01:18:46.479484 | orchestrator | 2026-01-08 01:18:45 | INFO  | Setting property os_version: 0.6.3
2026-01-08 01:18:46.479489 | orchestrator | 2026-01-08 01:18:45 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-01-08 01:18:46.479494 | orchestrator | 2026-01-08 01:18:45 | INFO  | Setting property image_build_date: 2024-09-26
2026-01-08 01:18:46.479499 | orchestrator | 2026-01-08 01:18:45 | INFO  | Checking status of 'Cirros 0.6.3'
2026-01-08 01:18:46.479504 | orchestrator | 2026-01-08 01:18:45 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-01-08 01:18:46.479510 | orchestrator | 2026-01-08 01:18:45 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-01-08 01:18:46.806407 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-01-08 01:18:49.144141 | orchestrator | 2026-01-08 01:18:49 | INFO  | date: 2026-01-07
2026-01-08 01:18:49.144250 | orchestrator | 2026-01-08 01:18:49 | INFO  | image: octavia-amphora-haproxy-2025.1.20260107.qcow2
2026-01-08 01:18:49.144445 | orchestrator | 2026-01-08 01:18:49 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260107.qcow2
2026-01-08 01:18:49.145192 | orchestrator | 2026-01-08 01:18:49 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260107.qcow2.CHECKSUM
2026-01-08 01:18:49.428460 | orchestrator | 2026-01-08 01:18:49 | INFO  | checksum: bf514e9e1d697129cf8dddcfb410af0d63bc459a4d04eba20d86ba078865c5db
2026-01-08 01:18:49.506647 | orchestrator | 2026-01-08 01:18:49 | INFO  | It takes a moment until task 49fb6dc8-b910-4fc6-bc06-16f3e47979c7 (image-manager) has been started and output is visible here.
2026-01-08 01:19:59.765843 | orchestrator | 2026-01-08 01:18:51 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-01-07'
2026-01-08 01:19:59.765911 | orchestrator | 2026-01-08 01:18:51 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260107.qcow2: 200
2026-01-08 01:19:59.765922 | orchestrator | 2026-01-08 01:18:51 | INFO  | Importing image OpenStack Octavia Amphora 2026-01-07
2026-01-08 01:19:59.765931 | orchestrator | 2026-01-08 01:18:51 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260107.qcow2
2026-01-08 01:19:59.765939 | orchestrator | 2026-01-08 01:18:53 | INFO  | Waiting for image to leave queued state...
2026-01-08 01:19:59.765946 | orchestrator | 2026-01-08 01:18:55 | INFO  | Waiting for import to complete...
2026-01-08 01:19:59.765954 | orchestrator | 2026-01-08 01:19:05 | INFO  | Waiting for import to complete...
2026-01-08 01:19:59.765962 | orchestrator | 2026-01-08 01:19:15 | INFO  | Waiting for import to complete...
2026-01-08 01:19:59.765969 | orchestrator | 2026-01-08 01:19:25 | INFO  | Waiting for import to complete...
2026-01-08 01:19:59.765978 | orchestrator | 2026-01-08 01:19:35 | INFO  | Waiting for import to complete...
2026-01-08 01:19:59.765986 | orchestrator | 2026-01-08 01:19:45 | INFO  | Waiting for import to complete...
2026-01-08 01:19:59.765994 | orchestrator | 2026-01-08 01:19:55 | INFO  | Import of 'OpenStack Octavia Amphora 2026-01-07' successfully completed, reloading images
2026-01-08 01:19:59.766002 | orchestrator | 2026-01-08 01:19:55 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-01-07'
2026-01-08 01:19:59.766087 | orchestrator | 2026-01-08 01:19:55 | INFO  | Setting internal_version = 2026-01-07
2026-01-08 01:19:59.766098 | orchestrator | 2026-01-08 01:19:55 | INFO  | Setting image_original_user = ubuntu
2026-01-08 01:19:59.766105 | orchestrator | 2026-01-08 01:19:55 | INFO  | Adding tag amphora
2026-01-08 01:19:59.766111 | orchestrator | 2026-01-08 01:19:56 | INFO  | Adding tag os:ubuntu
2026-01-08 01:19:59.766118 | orchestrator | 2026-01-08 01:19:56 | INFO  | Setting property architecture: x86_64
2026-01-08 01:19:59.766124 | orchestrator | 2026-01-08 01:19:56 | INFO  | Setting property hw_disk_bus: scsi
2026-01-08 01:19:59.766131 | orchestrator | 2026-01-08 01:19:56 | INFO  | Setting property hw_rng_model: virtio
2026-01-08 01:19:59.766137 | orchestrator | 2026-01-08 01:19:56 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-01-08 01:19:59.766144 | orchestrator | 2026-01-08 01:19:57 | INFO  | Setting property hw_watchdog_action: reset
2026-01-08 01:19:59.766150 | orchestrator | 2026-01-08 01:19:57 | INFO  | Setting property hypervisor_type: qemu
2026-01-08 01:19:59.766156 | orchestrator | 2026-01-08 01:19:57 | INFO  | Setting property os_distro: ubuntu
2026-01-08 01:19:59.766162 | orchestrator | 2026-01-08 01:19:57 | INFO  | Setting property replace_frequency: quarterly
2026-01-08 01:19:59.766168 | orchestrator | 2026-01-08 01:19:57 | INFO  | Setting property uuid_validity: last-1
2026-01-08 01:19:59.766174 | orchestrator | 2026-01-08 01:19:58 | INFO  | Setting property provided_until: none
2026-01-08 01:19:59.766180 | orchestrator | 2026-01-08 01:19:58 | INFO  | Setting property os_purpose: network
2026-01-08 01:19:59.766191 | orchestrator | 2026-01-08 01:19:58 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-01-08 01:19:59.766197 | orchestrator | 2026-01-08 01:19:58 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-01-08 01:19:59.766203 | orchestrator | 2026-01-08 01:19:58 | INFO  | Setting property internal_version: 2026-01-07
2026-01-08 01:19:59.766209 | orchestrator | 2026-01-08 01:19:58 | INFO  | Setting property image_original_user: ubuntu
2026-01-08 01:19:59.766215 | orchestrator | 2026-01-08 01:19:59 | INFO  | Setting property os_version: 2026-01-07
2026-01-08 01:19:59.766222 | orchestrator | 2026-01-08 01:19:59 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2025.1.20260107.qcow2
2026-01-08 01:19:59.766228 | orchestrator | 2026-01-08 01:19:59 | INFO  | Setting property image_build_date: 2026-01-07
2026-01-08 01:19:59.766234 | orchestrator | 2026-01-08 01:19:59 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-01-07'
2026-01-08 01:19:59.766240 | orchestrator | 2026-01-08 01:19:59 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-01-07'
2026-01-08 01:19:59.766258 | orchestrator | 2026-01-08 01:19:59 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-01-08 01:19:59.766264 | orchestrator | 2026-01-08 01:19:59 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-01-08 01:19:59.766270 | orchestrator | 2026-01-08 01:19:59 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-01-08 01:19:59.766276 | orchestrator | 2026-01-08 01:19:59 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-01-08 01:20:00.232645 | orchestrator | ok: Runtime: 0:03:09.158176
2026-01-08 01:20:00.257831 |
2026-01-08 01:20:00.258042 | TASK [Run checks]
2026-01-08 01:20:00.957241 | orchestrator | + set -e
2026-01-08 01:20:00.957349 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-08 01:20:00.957358 | orchestrator | ++ export INTERACTIVE=false
2026-01-08 01:20:00.957365 | orchestrator | ++ INTERACTIVE=false
2026-01-08 01:20:00.957369 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-08 01:20:00.957373 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-08 01:20:00.957378 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-01-08 01:20:00.958378 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-01-08 01:20:00.964612 | orchestrator |
2026-01-08 01:20:00.964679 | orchestrator | # CHECK
2026-01-08 01:20:00.964686 | orchestrator |
2026-01-08 01:20:00.964692 | orchestrator | ++ export MANAGER_VERSION=latest
2026-01-08 01:20:00.964701 | orchestrator | ++ MANAGER_VERSION=latest
2026-01-08 01:20:00.964706 | orchestrator | + echo
2026-01-08 01:20:00.964711 | orchestrator | + echo '# CHECK'
2026-01-08 01:20:00.964717 | orchestrator | + echo
2026-01-08 01:20:00.964725 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-01-08 01:20:00.965687 | orchestrator | ++ semver latest 5.0.0
2026-01-08 01:20:01.034436 | orchestrator |
2026-01-08 01:20:01.034495 | orchestrator | ## Containers @ testbed-manager
2026-01-08 01:20:01.034502 | orchestrator |
2026-01-08 01:20:01.034509 | orchestrator | + [[ -1 -eq -1 ]]
2026-01-08 01:20:01.034515 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-01-08 01:20:01.034520 | orchestrator | + echo
2026-01-08 01:20:01.034526 | orchestrator | + echo '## Containers @ testbed-manager'
2026-01-08 01:20:01.034532 | orchestrator | + echo
2026-01-08 01:20:01.034538 | orchestrator | + osism container testbed-manager ps
2026-01-08 01:20:03.125857 | orchestrator | 2026-01-08 01:20:03 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-01-08 01:20:03.532886 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-01-08 01:20:03.532948 | orchestrator | 7e00ab5dbe90 registry.osism.tech/kolla/prometheus-blackbox-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_blackbox_exporter
2026-01-08 01:20:03.532956 | orchestrator | c37cb3102b70 registry.osism.tech/kolla/prometheus-alertmanager:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_alertmanager
2026-01-08 01:20:03.532961 | orchestrator | e91f6cafa973 registry.osism.tech/kolla/prometheus-cadvisor:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor
2026-01-08 01:20:03.532968 | orchestrator | cc5ccee25090 registry.osism.tech/kolla/prometheus-node-exporter:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter
2026-01-08 01:20:03.532974 | orchestrator | ee55e2b8c3f2 registry.osism.tech/kolla/prometheus-server:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_server
2026-01-08 01:20:03.532979 | orchestrator | cf51d7c35df4 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 18 minutes ago Up 18 minutes cephclient
2026-01-08 01:20:03.532983 | orchestrator | 312d3ab0092f registry.osism.tech/kolla/cron:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron
2026-01-08 01:20:03.532987 | orchestrator | 0b544c08b9f3 registry.osism.tech/kolla/kolla-toolbox:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox
2026-01-08 01:20:03.533004 | orchestrator | 316081bfe751 registry.osism.tech/kolla/fluentd:2025.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd
2026-01-08 01:20:03.533008 | orchestrator | 8cb5d2fda7f0 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 32 minutes ago Up 31 minutes (healthy) 80/tcp phpmyadmin
2026-01-08 01:20:03.533011 | orchestrator | f00472c876eb registry.osism.tech/osism/openstackclient:2025.1 "/usr/bin/dumb-init …" 33 minutes ago Up 32 minutes openstackclient
2026-01-08 01:20:03.533015 | orchestrator | 3715fae75587 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 33 minutes ago Up 32 minutes (healthy) 8080/tcp homer
2026-01-08 01:20:03.533019 | orchestrator | 83fd16c27652 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 57 minutes ago Up 56 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2026-01-08 01:20:03.533023 | orchestrator | 7aae397ce82d registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" About an hour ago Up 39 minutes (healthy) manager-inventory_reconciler-1
2026-01-08 01:20:03.533027 | orchestrator | 17d3b857bab0 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) osism-kubernetes
2026-01-08 01:20:03.533041 | orchestrator | 15623c0b3ca6 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) osism-ansible
2026-01-08 01:20:03.533045 | orchestrator | 7ecc75610761 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) ceph-ansible
2026-01-08 01:20:03.533049 | orchestrator | ca639c046487 registry.osism.tech/osism/kolla-ansible:2025.1 "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) kolla-ansible
2026-01-08 01:20:03.533053 | orchestrator | 02626c22fc56 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" About an hour ago Up 40 minutes (healthy) 8000/tcp manager-ara-server-1
2026-01-08 01:20:03.533057 | orchestrator | c5d980f5ba59 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-listener-1
2026-01-08 01:20:03.533061 | orchestrator | fef608ff88a8 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-openstack-1
2026-01-08 01:20:03.533065 | orchestrator | 4836cff6bd0d registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" About an hour ago Up 40 minutes 192.168.16.5:3000->3000/tcp osism-frontend
2026-01-08 01:20:03.533069 | orchestrator | e10911c01ade registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-01-08 01:20:03.533076 | orchestrator | 15fed731ebc5 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" About an hour ago Up 40 minutes (healthy) osismclient
2026-01-08 01:20:03.533080 | orchestrator | cd8d18fe040c registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" About an hour ago Up 40 minutes (healthy) 6379/tcp manager-redis-1
2026-01-08 01:20:03.533083 | orchestrator | 23ad2bcceb1c registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" About an hour ago Up 40 minutes (healthy) 3306/tcp manager-mariadb-1
2026-01-08 01:20:03.533087 | orchestrator | ec83546f38fb registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-flower-1
2026-01-08 01:20:03.533091 | orchestrator | 2d888dd9ed6a registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-beat-1
2026-01-08 01:20:03.533095 | orchestrator | e5b1b1c24f74 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-01-08 01:20:03.855819 | orchestrator |
2026-01-08 01:20:03.855882 | orchestrator | ## Images @ testbed-manager
2026-01-08 01:20:03.855889 | orchestrator |
2026-01-08 01:20:03.855895 | orchestrator | + echo
2026-01-08 01:20:03.855901 | orchestrator | + echo '## Images @ testbed-manager'
2026-01-08 01:20:03.855907 | orchestrator | + echo
2026-01-08 01:20:03.855916 | orchestrator | + osism container testbed-manager images
2026-01-08 01:20:06.314900 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-01-08 01:20:06.314975 | orchestrator | registry.osism.tech/osism/osism-ansible latest 4e8db382ca62 About an hour ago 611MB
2026-01-08 01:20:06.314986 | orchestrator | registry.osism.tech/osism/kolla-ansible 2025.1 9504bd40f13e About an hour ago 610MB
2026-01-08 01:20:06.314995 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 812309186f7c About an hour ago 560MB
2026-01-08 01:20:06.315003 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 5f0280d24e89 About an hour ago 1.23GB
2026-01-08 01:20:06.315010 | orchestrator | registry.osism.tech/osism/osism latest e590936ce05d About an hour ago 384MB
2026-01-08 01:20:06.315018 | orchestrator | registry.osism.tech/osism/osism-frontend latest ce644a5675a2 About an hour ago 239MB
2026-01-08 01:20:06.315025 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 7e951821461a About an hour ago 335MB
2026-01-08 01:20:06.315033 | orchestrator | registry.osism.tech/osism/openstackclient 2025.1 110e4a929f7b 22 hours ago 211MB
2026-01-08 01:20:06.315040 | orchestrator | registry.osism.tech/osism/cephclient reef 4fd40d5e6381 22 hours ago 453MB
2026-01-08 01:20:06.315048 | orchestrator | registry.osism.tech/kolla/cron 2025.1 303e16428a6b 23 hours ago 271MB
2026-01-08 01:20:06.315056 | orchestrator | registry.osism.tech/kolla/fluentd 2025.1 1b11dcf76817 23 hours ago 584MB
2026-01-08 01:20:06.315064 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2025.1 5c7aa618d82d 23 hours ago 679MB
2026-01-08 01:20:06.315072 | orchestrator | registry.osism.tech/kolla/prometheus-server 2025.1 8ae631080a0f 23 hours ago 855MB
2026-01-08 01:20:06.315095 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2025.1 c51c1d344aed 23 hours ago 311MB
2026-01-08 01:20:06.315103 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2025.1 c40be03919a0 23 hours ago 409MB
2026-01-08 01:20:06.315110 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2025.1 64761ab06575 23 hours ago 313MB
2026-01-08 01:20:06.315117 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2025.1 f6347a1bd715 23 hours ago 363MB
2026-01-08 01:20:06.315125 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 weeks ago 11.5MB
2026-01-08 01:20:06.315132 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 7 weeks ago 334MB
2026-01-08 01:20:06.315140 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine 13105d2858de 2 months ago 41.4MB
2026-01-08 01:20:06.315147 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 3 months ago 742MB
2026-01-08 01:20:06.315155 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 4 months ago 275MB
2026-01-08 01:20:06.315162 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 5 months ago 226MB
2026-01-08 01:20:06.315170 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 19 months ago 146MB
2026-01-08 01:20:06.658218 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-01-08 01:20:06.659046 | orchestrator | ++ semver latest 5.0.0
2026-01-08 01:20:06.699484 | orchestrator |
2026-01-08 01:20:06.699544 | orchestrator | ## Containers @ testbed-node-0
2026-01-08 01:20:06.699558 | orchestrator |
2026-01-08 01:20:06.699566 | orchestrator | + [[ -1 -eq -1 ]]
2026-01-08 01:20:06.699574 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-01-08 01:20:06.699583 | orchestrator | + echo
2026-01-08 01:20:06.699591 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-01-08 01:20:06.699600 | orchestrator | + echo
2026-01-08 01:20:06.699608 | orchestrator | + osism container testbed-node-0 ps
2026-01-08 01:20:09.131168 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-01-08 01:20:09.131221 | orchestrator | af69b283316c registry.osism.tech/kolla/octavia-worker:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-01-08 01:20:09.131228 | orchestrator | 39a810410f43 registry.osism.tech/kolla/octavia-housekeeping:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-01-08 01:20:09.131233 | orchestrator | b49695619cf8 registry.osism.tech/kolla/octavia-health-manager:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-01-08 01:20:09.131250 | orchestrator | a7a466f06a9f registry.osism.tech/kolla/octavia-driver-agent:2025.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent
2026-01-08 01:20:09.131255 | orchestrator | f54b8168cd6d registry.osism.tech/kolla/octavia-api:2025.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2026-01-08 01:20:09.131260 | orchestrator | 60d0c2ec4d5d registry.osism.tech/kolla/nova-novncproxy:2025.1 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_novncproxy
2026-01-08 01:20:09.131264 | orchestrator | c44e20aeb9e6 registry.osism.tech/kolla/nova-conductor:2025.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2026-01-08 01:20:09.131269 | orchestrator | 1bb6c500962c registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_metadata
2026-01-08 01:20:09.131282 | orchestrator | aca1ad118892 registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api
2026-01-08 01:20:09.131287 | orchestrator | 2fe6e32a9e18 registry.osism.tech/kolla/nova-scheduler:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler
2026-01-08 01:20:09.131294 | orchestrator | c61512169d5e registry.osism.tech/kolla/cinder-backup:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_backup
2026-01-08 01:20:09.131300 | orchestrator | c7532521fdc1 registry.osism.tech/kolla/cinder-volume:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_volume
2026-01-08 01:20:09.131306 | orchestrator | 1675f01e9d56 registry.osism.tech/kolla/cinder-scheduler:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler
2026-01-08 01:20:09.131312 | orchestrator | 08b1c08e48c5 registry.osism.tech/kolla/glance-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api
2026-01-08 01:20:09.131319 | orchestrator | 42406282fd9f registry.osism.tech/kolla/cinder-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api
2026-01-08 01:20:09.131325 | orchestrator | 7df833b25b23 registry.osism.tech/kolla/grafana:2025.1 "dumb-init --single-…" 12 minutes ago Up 12 minutes grafana
2026-01-08 01:20:09.131332 | orchestrator | 9d4c4b56963e registry.osism.tech/kolla/magnum-conductor:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor
2026-01-08 01:20:09.131338 | orchestrator | 3252e5407e15 registry.osism.tech/kolla/magnum-api:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api
2026-01-08 01:20:09.131344 | orchestrator | b5afd448a65f registry.osism.tech/kolla/placement-api:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api
2026-01-08 01:20:09.131348 | orchestrator | 784c94c5ea65 registry.osism.tech/kolla/neutron-server:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server
2026-01-08 01:20:09.131351 | orchestrator | cae7fcadb953 registry.osism.tech/kolla/designate-worker:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker
2026-01-08 01:20:09.131363 | orchestrator | a0bebe4e79f1 registry.osism.tech/kolla/designate-mdns:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns
2026-01-08 01:20:09.131370 | orchestrator | a0c6dc6e93e4 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter
2026-01-08 01:20:09.131374 | orchestrator | 2bb2226cf7fe registry.osism.tech/kolla/designate-producer:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer
2026-01-08 01:20:09.131378 | orchestrator | 417ae5ff3b6c registry.osism.tech/kolla/designate-central:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central
2026-01-08 01:20:09.131384 | orchestrator | 6979fa654972 registry.osism.tech/kolla/designate-api:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api
2026-01-08 01:20:09.131388 | orchestrator | c6918e87e7a6 registry.osism.tech/kolla/prometheus-cadvisor:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor
2026-01-08 01:20:09.131392 | orchestrator | ebb7a934ddf7 registry.osism.tech/kolla/designate-backend-bind9:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9
2026-01-08 01:20:09.131412 | orchestrator | cb392f2ce950 registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_memcached_exporter
2026-01-08 01:20:09.131416 | orchestrator | 6555044f801f registry.osism.tech/kolla/barbican-worker:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker
2026-01-08 01:20:09.131423 | orchestrator | 6eeffbc0d7e5 registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter
2026-01-08 01:20:09.131429 | orchestrator | 98e76aad87c3 registry.osism.tech/kolla/barbican-keystone-listener:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener
2026-01-08 01:20:09.131436 | orchestrator | a130ee913a27 registry.osism.tech/kolla/prometheus-node-exporter:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter
2026-01-08 01:20:09.131442 | orchestrator |
7efed92367f6 registry.osism.tech/kolla/barbican-api:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2026-01-08 01:20:09.131448 | orchestrator | 3719081216c9 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-0 2026-01-08 01:20:09.131454 | orchestrator | a6e44ab45241 registry.osism.tech/kolla/keystone:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2026-01-08 01:20:09.131461 | orchestrator | 0fd5caa53f83 registry.osism.tech/kolla/keystone-fernet:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2026-01-08 01:20:09.131467 | orchestrator | 04c3301dcbf0 registry.osism.tech/kolla/keystone-ssh:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2026-01-08 01:20:09.131486 | orchestrator | e8b6cce08b78 registry.osism.tech/kolla/horizon:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2026-01-08 01:20:09.131614 | orchestrator | e711112d87cd registry.osism.tech/kolla/mariadb-server:2025.1 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2026-01-08 01:20:09.131626 | orchestrator | 352fe5ee65c8 registry.osism.tech/kolla/opensearch-dashboards:2025.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2026-01-08 01:20:09.131633 | orchestrator | 3863c2211c4b registry.osism.tech/kolla/opensearch:2025.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2026-01-08 01:20:09.131640 | orchestrator | 2d9632868cc0 registry.osism.tech/kolla/keepalived:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2026-01-08 01:20:09.131647 | orchestrator | 6647c0a4e11a registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-0 2026-01-08 01:20:09.131653 | orchestrator | 0ae72dd03a39 registry.osism.tech/kolla/proxysql:2025.1 
"dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2026-01-08 01:20:09.131660 | orchestrator | 2068675374db registry.osism.tech/kolla/haproxy:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2026-01-08 01:20:09.131671 | orchestrator | 7b1780e5d00c registry.osism.tech/kolla/ovn-northd:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2026-01-08 01:20:09.131683 | orchestrator | 16bf988ad7f3 registry.osism.tech/kolla/ovn-sb-db-relay:2025.1 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_sb_db_relay_1 2026-01-08 01:20:09.131690 | orchestrator | d15400d9c7a2 registry.osism.tech/kolla/ovn-sb-db-server:2025.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_sb_db 2026-01-08 01:20:09.131697 | orchestrator | 97d2d59387f6 registry.osism.tech/kolla/ovn-nb-db-server:2025.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_nb_db 2026-01-08 01:20:09.131703 | orchestrator | d9211a38ecfb registry.osism.tech/kolla/ovn-controller:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller 2026-01-08 01:20:09.131771 | orchestrator | d4d6f549a26f registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-0 2026-01-08 01:20:09.131783 | orchestrator | 77a2d2278aed registry.osism.tech/kolla/rabbitmq:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2026-01-08 01:20:09.131789 | orchestrator | 73de94ca0cc1 registry.osism.tech/kolla/openvswitch-vswitchd:2025.1 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2026-01-08 01:20:09.131796 | orchestrator | f9722e1e124d registry.osism.tech/kolla/openvswitch-db-server:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2026-01-08 01:20:09.131809 | orchestrator | c61adfbee174 registry.osism.tech/kolla/redis-sentinel:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 
2026-01-08 01:20:09.131820 | orchestrator | 5f860672063a registry.osism.tech/kolla/redis:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2026-01-08 01:20:09.131826 | orchestrator | c9de667b912b registry.osism.tech/kolla/memcached:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2026-01-08 01:20:09.131833 | orchestrator | a141b5accca1 registry.osism.tech/kolla/cron:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2026-01-08 01:20:09.131839 | orchestrator | 73fcf8adc0dc registry.osism.tech/kolla/kolla-toolbox:2025.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes kolla_toolbox 2026-01-08 01:20:09.131846 | orchestrator | 2da88717511a registry.osism.tech/kolla/fluentd:2025.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2026-01-08 01:20:09.458356 | orchestrator | 2026-01-08 01:20:09.458416 | orchestrator | ## Images @ testbed-node-0 2026-01-08 01:20:09.458427 | orchestrator | 2026-01-08 01:20:09.458431 | orchestrator | + echo 2026-01-08 01:20:09.458448 | orchestrator | + echo '## Images @ testbed-node-0' 2026-01-08 01:20:09.458453 | orchestrator | + echo 2026-01-08 01:20:09.458457 | orchestrator | + osism container testbed-node-0 images 2026-01-08 01:20:11.914084 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-08 01:20:11.914152 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 0c99c55b9df6 22 hours ago 1.27GB 2026-01-08 01:20:11.914160 | orchestrator | registry.osism.tech/kolla/haproxy 2025.1 b7e672e6889b 23 hours ago 279MB 2026-01-08 01:20:11.914167 | orchestrator | registry.osism.tech/kolla/opensearch 2025.1 cba5e0b41e63 23 hours ago 1.56GB 2026-01-08 01:20:11.914173 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2025.1 7560a51b902b 23 hours ago 1.53GB 2026-01-08 01:20:11.914179 | orchestrator | registry.osism.tech/kolla/grafana 2025.1 38cdae39937a 23 hours ago 1.02GB 2026-01-08 01:20:11.914186 | orchestrator | 
registry.osism.tech/kolla/keepalived 2025.1 560177faa422 23 hours ago 282MB 2026-01-08 01:20:11.914205 | orchestrator | registry.osism.tech/kolla/rabbitmq 2025.1 07811fa0309c 23 hours ago 344MB 2026-01-08 01:20:11.914212 | orchestrator | registry.osism.tech/kolla/cron 2025.1 303e16428a6b 23 hours ago 271MB 2026-01-08 01:20:11.914218 | orchestrator | registry.osism.tech/kolla/fluentd 2025.1 1b11dcf76817 23 hours ago 584MB 2026-01-08 01:20:11.914224 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2025.1 5c7aa618d82d 23 hours ago 679MB 2026-01-08 01:20:11.914230 | orchestrator | registry.osism.tech/kolla/memcached 2025.1 03feb8c06cbf 23 hours ago 272MB 2026-01-08 01:20:11.914236 | orchestrator | registry.osism.tech/kolla/proxysql 2025.1 fca6414d3e37 23 hours ago 417MB 2026-01-08 01:20:11.914242 | orchestrator | registry.osism.tech/kolla/horizon 2025.1 1dec30422405 23 hours ago 1.2GB 2026-01-08 01:20:11.914247 | orchestrator | registry.osism.tech/kolla/redis 2025.1 9c3c95d6a9b1 23 hours ago 278MB 2026-01-08 01:20:11.914253 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2025.1 b0a0b7625646 23 hours ago 278MB 2026-01-08 01:20:11.914259 | orchestrator | registry.osism.tech/kolla/mariadb-server 2025.1 4fd99ce5e261 23 hours ago 457MB 2026-01-08 01:20:11.914264 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2025.1 92280230149e 23 hours ago 287MB 2026-01-08 01:20:11.914270 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2025.1 700a10c15654 23 hours ago 287MB 2026-01-08 01:20:11.914276 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2025.1 c17c794182d5 23 hours ago 297MB 2026-01-08 01:20:11.914281 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2025.1 5ab59bd749db 23 hours ago 306MB 2026-01-08 01:20:11.914287 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2025.1 1ffbc208ca63 23 hours ago 304MB 2026-01-08 01:20:11.914293 | orchestrator | 
registry.osism.tech/kolla/prometheus-node-exporter 2025.1 c51c1d344aed 23 hours ago 311MB 2026-01-08 01:20:11.914299 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2025.1 f6347a1bd715 23 hours ago 363MB 2026-01-08 01:20:11.914304 | orchestrator | registry.osism.tech/kolla/nova-api 2025.1 9b55b40f70ed 23 hours ago 1.23GB 2026-01-08 01:20:11.914310 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2025.1 ff7dcbda8159 23 hours ago 1.39GB 2026-01-08 01:20:11.914317 | orchestrator | registry.osism.tech/kolla/nova-conductor 2025.1 1d6c6d828718 23 hours ago 1.23GB 2026-01-08 01:20:11.914323 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2025.1 485725f63810 23 hours ago 1.23GB 2026-01-08 01:20:11.914329 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2025.1 caa50251cca4 23 hours ago 1.01GB 2026-01-08 01:20:11.914334 | orchestrator | registry.osism.tech/kolla/skyline-console 2025.1 9e5b47cd3f0c 23 hours ago 1.06GB 2026-01-08 01:20:11.914340 | orchestrator | registry.osism.tech/kolla/designate-worker 2025.1 66747836589f 23 hours ago 1.01GB 2026-01-08 01:20:11.914346 | orchestrator | registry.osism.tech/kolla/designate-producer 2025.1 76480022c96e 23 hours ago 1GB 2026-01-08 01:20:11.914352 | orchestrator | registry.osism.tech/kolla/designate-mdns 2025.1 b22f41166ad2 23 hours ago 1GB 2026-01-08 01:20:11.914358 | orchestrator | registry.osism.tech/kolla/designate-central 2025.1 8531051f945f 23 hours ago 1e+03MB 2026-01-08 01:20:11.914364 | orchestrator | registry.osism.tech/kolla/designate-api 2025.1 a4af0d4c4ae7 23 hours ago 1GB 2026-01-08 01:20:11.914369 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2025.1 703a4f87f3a0 23 hours ago 1.01GB 2026-01-08 01:20:11.914375 | orchestrator | registry.osism.tech/kolla/aodh-api 2025.1 5cec896e573f 23 hours ago 989MB 2026-01-08 01:20:11.914402 | orchestrator | registry.osism.tech/kolla/aodh-listener 2025.1 8ffeac8f9ddf 23 hours ago 990MB 2026-01-08 01:20:11.914408 | 
orchestrator | registry.osism.tech/kolla/aodh-notifier 2025.1 7fed65c9115c 23 hours ago 990MB 2026-01-08 01:20:11.914414 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2025.1 2705366a3e3b 23 hours ago 990MB 2026-01-08 01:20:11.914420 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2025.1 d964b4623d57 23 hours ago 992MB 2026-01-08 01:20:11.914425 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2025.1 d16721db19c4 24 hours ago 991MB 2026-01-08 01:20:11.914431 | orchestrator | registry.osism.tech/kolla/magnum-api 2025.1 0ad03b27ec5b 24 hours ago 1.15GB 2026-01-08 01:20:11.914437 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2025.1 7e0bc3b8ae4b 24 hours ago 1.26GB 2026-01-08 01:20:11.914443 | orchestrator | registry.osism.tech/kolla/octavia-api 2025.1 af35e31a7880 24 hours ago 1.07GB 2026-01-08 01:20:11.914448 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2025.1 2035e610a14b 24 hours ago 1.05GB 2026-01-08 01:20:11.914454 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2025.1 2b74439df4fc 24 hours ago 1.07GB 2026-01-08 01:20:11.914460 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2025.1 59b8f29d4a41 24 hours ago 1.05GB 2026-01-08 01:20:11.914465 | orchestrator | registry.osism.tech/kolla/octavia-worker 2025.1 157a2d695be1 24 hours ago 1.05GB 2026-01-08 01:20:11.914471 | orchestrator | registry.osism.tech/kolla/cinder-volume 2025.1 d7714d51292d 24 hours ago 1.79GB 2026-01-08 01:20:11.914477 | orchestrator | registry.osism.tech/kolla/cinder-api 2025.1 760d9f19d35f 24 hours ago 1.43GB 2026-01-08 01:20:11.914483 | orchestrator | registry.osism.tech/kolla/cinder-backup 2025.1 54775c88e976 24 hours ago 1.44GB 2026-01-08 01:20:11.914488 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2025.1 94e908490f52 24 hours ago 1.43GB 2026-01-08 01:20:11.914494 | orchestrator | registry.osism.tech/kolla/placement-api 2025.1 b273758bdd3e 24 hours ago 992MB 2026-01-08 
01:20:11.914500 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2025.1 9d5e54e8f082 24 hours ago 1.05GB 2026-01-08 01:20:11.914506 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2025.1 29e1b1df5c70 24 hours ago 1.05GB 2026-01-08 01:20:11.914517 | orchestrator | registry.osism.tech/kolla/keystone 2025.1 22484a9544f5 24 hours ago 1.1GB 2026-01-08 01:20:11.914523 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2025.1 a4b841768b0a 24 hours ago 1GB 2026-01-08 01:20:11.914529 | orchestrator | registry.osism.tech/kolla/barbican-worker 2025.1 4e5fff0ee992 24 hours ago 1GB 2026-01-08 01:20:11.914534 | orchestrator | registry.osism.tech/kolla/barbican-api 2025.1 9ab1182c7d0c 24 hours ago 1e+03MB 2026-01-08 01:20:11.914540 | orchestrator | registry.osism.tech/kolla/neutron-server 2025.1 c36c58dd5b6b 24 hours ago 1.23GB 2026-01-08 01:20:11.914546 | orchestrator | registry.osism.tech/kolla/glance-api 2025.1 3a6e0322a87a 24 hours ago 1.12GB 2026-01-08 01:20:11.914552 | orchestrator | registry.osism.tech/kolla/ovn-controller 2025.1 b1de780e3d39 24 hours ago 295MB 2026-01-08 01:20:11.914557 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2025.1 a3aed6cad525 24 hours ago 295MB 2026-01-08 01:20:11.914563 | orchestrator | registry.osism.tech/kolla/ovn-northd 2025.1 7c62f5c8f694 24 hours ago 295MB 2026-01-08 01:20:11.914569 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2025.1 242c25060037 24 hours ago 295MB 2026-01-08 01:20:11.914579 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-relay 2025.1 ef84fca3f266 24 hours ago 295MB 2026-01-08 01:20:12.258431 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-08 01:20:12.258500 | orchestrator | ++ semver latest 5.0.0 2026-01-08 01:20:12.311041 | orchestrator | 2026-01-08 01:20:12.311094 | orchestrator | ## Containers @ testbed-node-1 2026-01-08 01:20:12.311101 | orchestrator | 2026-01-08 01:20:12.311106 | orchestrator | + 
[[ -1 -eq -1 ]] 2026-01-08 01:20:12.311110 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-08 01:20:12.311115 | orchestrator | + echo 2026-01-08 01:20:12.311120 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-01-08 01:20:12.311125 | orchestrator | + echo 2026-01-08 01:20:12.311130 | orchestrator | + osism container testbed-node-1 ps 2026-01-08 01:20:14.728078 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-01-08 01:20:14.728143 | orchestrator | 6e6ddf1ac531 registry.osism.tech/kolla/octavia-worker:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-01-08 01:20:14.728152 | orchestrator | 2911091fbfe4 registry.osism.tech/kolla/octavia-housekeeping:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-01-08 01:20:14.728158 | orchestrator | 6603a0dbe2c4 registry.osism.tech/kolla/octavia-health-manager:2025.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager 2026-01-08 01:20:14.728163 | orchestrator | 3ba264594a08 registry.osism.tech/kolla/octavia-driver-agent:2025.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2026-01-08 01:20:14.728168 | orchestrator | 58d399da33b0 registry.osism.tech/kolla/octavia-api:2025.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2026-01-08 01:20:14.728177 | orchestrator | 7fe38010d74a registry.osism.tech/kolla/nova-novncproxy:2025.1 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_novncproxy 2026-01-08 01:20:14.728182 | orchestrator | bf88287c0e06 registry.osism.tech/kolla/nova-conductor:2025.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2026-01-08 01:20:14.728188 | orchestrator | ffb2d85e5636 registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_metadata 2026-01-08 01:20:14.728193 | orchestrator | 12456e56bc00 
registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api 2026-01-08 01:20:14.728199 | orchestrator | 13fdfce7ace0 registry.osism.tech/kolla/nova-scheduler:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-01-08 01:20:14.728205 | orchestrator | 59153cef6f8a registry.osism.tech/kolla/cinder-backup:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_backup 2026-01-08 01:20:14.728216 | orchestrator | 74f99d381d39 registry.osism.tech/kolla/cinder-volume:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_volume 2026-01-08 01:20:14.728222 | orchestrator | 491b4dd6b3fd registry.osism.tech/kolla/glance-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2026-01-08 01:20:14.728228 | orchestrator | 5388e3f9e39e registry.osism.tech/kolla/cinder-scheduler:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2026-01-08 01:20:14.728234 | orchestrator | 7c8573e29878 registry.osism.tech/kolla/cinder-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2026-01-08 01:20:14.728240 | orchestrator | 1e3476126218 registry.osism.tech/kolla/grafana:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes grafana 2026-01-08 01:20:14.728260 | orchestrator | 3b312863f6ee registry.osism.tech/kolla/magnum-conductor:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2026-01-08 01:20:14.728267 | orchestrator | f4431e0dfdb9 registry.osism.tech/kolla/magnum-api:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2026-01-08 01:20:14.728274 | orchestrator | b939d1af2f57 registry.osism.tech/kolla/placement-api:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api 2026-01-08 01:20:14.728279 | orchestrator | bf31255f1d60 registry.osism.tech/kolla/neutron-server:2025.1 "dumb-init 
--single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2026-01-08 01:20:14.728285 | orchestrator | 071617658efa registry.osism.tech/kolla/designate-worker:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker 2026-01-08 01:20:14.728302 | orchestrator | c681cabe0331 registry.osism.tech/kolla/designate-mdns:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2026-01-08 01:20:14.728309 | orchestrator | a11f9cdaaa47 registry.osism.tech/kolla/designate-producer:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2026-01-08 01:20:14.728315 | orchestrator | cd069c4fedc2 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter 2026-01-08 01:20:14.728321 | orchestrator | a7b051914340 registry.osism.tech/kolla/designate-central:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2026-01-08 01:20:14.728327 | orchestrator | 167b63896646 registry.osism.tech/kolla/prometheus-cadvisor:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2026-01-08 01:20:14.728334 | orchestrator | fca15c895347 registry.osism.tech/kolla/designate-api:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2026-01-08 01:20:14.728340 | orchestrator | a6a330674398 registry.osism.tech/kolla/designate-backend-bind9:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9 2026-01-08 01:20:14.728346 | orchestrator | 2c17dd1bdc8e registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_memcached_exporter 2026-01-08 01:20:14.728352 | orchestrator | d102d9c6c471 registry.osism.tech/kolla/barbican-worker:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker 2026-01-08 01:20:14.728359 | 
orchestrator | 9ecfa6dbc4b9 registry.osism.tech/kolla/barbican-keystone-listener:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener 2026-01-08 01:20:14.728364 | orchestrator | cae24b2e952f registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter 2026-01-08 01:20:14.728369 | orchestrator | 9dee88a0db08 registry.osism.tech/kolla/barbican-api:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2026-01-08 01:20:14.728375 | orchestrator | 9017bd35cc64 registry.osism.tech/kolla/prometheus-node-exporter:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2026-01-08 01:20:14.728384 | orchestrator | 3350e3423fd7 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-1 2026-01-08 01:20:14.728397 | orchestrator | 45ef3c1968e7 registry.osism.tech/kolla/keystone:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2026-01-08 01:20:14.728403 | orchestrator | db5ed09db758 registry.osism.tech/kolla/horizon:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2026-01-08 01:20:14.728408 | orchestrator | 7c48dda655e5 registry.osism.tech/kolla/keystone-fernet:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2026-01-08 01:20:14.728414 | orchestrator | 94157ff0d8e7 registry.osism.tech/kolla/keystone-ssh:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2026-01-08 01:20:14.728421 | orchestrator | 0a95743a58c0 registry.osism.tech/kolla/opensearch-dashboards:2025.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2026-01-08 01:20:14.728427 | orchestrator | efe19225f320 registry.osism.tech/kolla/mariadb-server:2025.1 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 
2026-01-08 01:20:14.728432 | orchestrator | a3daca425851 registry.osism.tech/kolla/opensearch:2025.1 "dumb-init --single-…" 23 minutes ago Up 22 minutes (healthy) opensearch 2026-01-08 01:20:14.728439 | orchestrator | bb02ed44cda9 registry.osism.tech/kolla/keepalived:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2026-01-08 01:20:14.728445 | orchestrator | 678eaee9c945 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-1 2026-01-08 01:20:14.728456 | orchestrator | 7d28347994fc registry.osism.tech/kolla/proxysql:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2026-01-08 01:20:14.728462 | orchestrator | 4fb2065d66c6 registry.osism.tech/kolla/haproxy:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2026-01-08 01:20:14.728667 | orchestrator | d43e08f49f08 registry.osism.tech/kolla/ovn-northd:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2026-01-08 01:20:14.728703 | orchestrator | 2336c5f2fd48 registry.osism.tech/kolla/ovn-sb-db-relay:2025.1 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_sb_db_relay_1 2026-01-08 01:20:14.728713 | orchestrator | 70d034fc3c12 registry.osism.tech/kolla/rabbitmq:2025.1 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2026-01-08 01:20:14.728733 | orchestrator | e8f0a0940e60 registry.osism.tech/kolla/ovn-sb-db-server:2025.1 "dumb-init --single-…" 28 minutes ago Up 26 minutes ovn_sb_db 2026-01-08 01:20:14.728741 | orchestrator | 30d1c7395e24 registry.osism.tech/kolla/ovn-nb-db-server:2025.1 "dumb-init --single-…" 28 minutes ago Up 26 minutes ovn_nb_db 2026-01-08 01:20:14.728745 | orchestrator | 224e7badb45c registry.osism.tech/kolla/ovn-controller:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller 2026-01-08 01:20:14.728749 | orchestrator | 2ac7f7c19772 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 29 
minutes ago Up 29 minutes ceph-mon-testbed-node-1 2026-01-08 01:20:14.728753 | orchestrator | 3db8b846d431 registry.osism.tech/kolla/openvswitch-vswitchd:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd 2026-01-08 01:20:14.728757 | orchestrator | 67b4bd77a654 registry.osism.tech/kolla/openvswitch-db-server:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2026-01-08 01:20:14.728771 | orchestrator | a79567df4b8a registry.osism.tech/kolla/redis-sentinel:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2026-01-08 01:20:14.728775 | orchestrator | 78d155322f6c registry.osism.tech/kolla/redis:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2026-01-08 01:20:14.728779 | orchestrator | 29cf3e357ae0 registry.osism.tech/kolla/memcached:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2026-01-08 01:20:14.728782 | orchestrator | e268c4535adb registry.osism.tech/kolla/cron:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2026-01-08 01:20:14.728786 | orchestrator | 710b713b3b9d registry.osism.tech/kolla/kolla-toolbox:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2026-01-08 01:20:14.728790 | orchestrator | 4d39c4634e2b registry.osism.tech/kolla/fluentd:2025.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2026-01-08 01:20:15.052623 | orchestrator | 2026-01-08 01:20:15.052684 | orchestrator | ## Images @ testbed-node-1 2026-01-08 01:20:15.052707 | orchestrator | + echo 2026-01-08 01:20:15.052715 | orchestrator | + echo '## Images @ testbed-node-1' 2026-01-08 01:20:15.052751 | orchestrator | + echo 2026-01-08 01:20:15.052758 | orchestrator | 2026-01-08 01:20:15.052765 | orchestrator | + osism container testbed-node-1 images 2026-01-08 01:20:17.477051 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-08 01:20:17.477114 | orchestrator | 
registry.osism.tech/osism/ceph-daemon reef 0c99c55b9df6 22 hours ago 1.27GB 2026-01-08 01:20:17.477121 | orchestrator | registry.osism.tech/kolla/haproxy 2025.1 b7e672e6889b 23 hours ago 279MB 2026-01-08 01:20:17.477136 | orchestrator | registry.osism.tech/kolla/opensearch 2025.1 cba5e0b41e63 23 hours ago 1.56GB 2026-01-08 01:20:17.477142 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2025.1 7560a51b902b 23 hours ago 1.53GB 2026-01-08 01:20:17.477147 | orchestrator | registry.osism.tech/kolla/keepalived 2025.1 560177faa422 23 hours ago 282MB 2026-01-08 01:20:17.477153 | orchestrator | registry.osism.tech/kolla/grafana 2025.1 38cdae39937a 23 hours ago 1.02GB 2026-01-08 01:20:17.477158 | orchestrator | registry.osism.tech/kolla/rabbitmq 2025.1 07811fa0309c 23 hours ago 344MB 2026-01-08 01:20:17.477164 | orchestrator | registry.osism.tech/kolla/cron 2025.1 303e16428a6b 23 hours ago 271MB 2026-01-08 01:20:17.477169 | orchestrator | registry.osism.tech/kolla/fluentd 2025.1 1b11dcf76817 23 hours ago 584MB 2026-01-08 01:20:17.477174 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2025.1 5c7aa618d82d 23 hours ago 679MB 2026-01-08 01:20:17.477180 | orchestrator | registry.osism.tech/kolla/proxysql 2025.1 fca6414d3e37 23 hours ago 417MB 2026-01-08 01:20:17.477185 | orchestrator | registry.osism.tech/kolla/memcached 2025.1 03feb8c06cbf 23 hours ago 272MB 2026-01-08 01:20:17.477190 | orchestrator | registry.osism.tech/kolla/horizon 2025.1 1dec30422405 23 hours ago 1.2GB 2026-01-08 01:20:17.477196 | orchestrator | registry.osism.tech/kolla/redis 2025.1 9c3c95d6a9b1 23 hours ago 278MB 2026-01-08 01:20:17.477201 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2025.1 b0a0b7625646 23 hours ago 278MB 2026-01-08 01:20:17.477206 | orchestrator | registry.osism.tech/kolla/mariadb-server 2025.1 4fd99ce5e261 23 hours ago 457MB 2026-01-08 01:20:17.477212 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2025.1 92280230149e 23 hours ago 
287MB 2026-01-08 01:20:17.477231 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2025.1 700a10c15654 23 hours ago 287MB 2026-01-08 01:20:17.477236 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2025.1 c17c794182d5 23 hours ago 297MB 2026-01-08 01:20:17.477242 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2025.1 5ab59bd749db 23 hours ago 306MB 2026-01-08 01:20:17.477247 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2025.1 1ffbc208ca63 23 hours ago 304MB 2026-01-08 01:20:17.477257 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2025.1 c51c1d344aed 23 hours ago 311MB 2026-01-08 01:20:17.477263 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2025.1 f6347a1bd715 23 hours ago 363MB 2026-01-08 01:20:17.477268 | orchestrator | registry.osism.tech/kolla/nova-api 2025.1 9b55b40f70ed 23 hours ago 1.23GB 2026-01-08 01:20:17.477274 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2025.1 ff7dcbda8159 23 hours ago 1.39GB 2026-01-08 01:20:17.477279 | orchestrator | registry.osism.tech/kolla/nova-conductor 2025.1 1d6c6d828718 23 hours ago 1.23GB 2026-01-08 01:20:17.477285 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2025.1 485725f63810 23 hours ago 1.23GB 2026-01-08 01:20:17.477290 | orchestrator | registry.osism.tech/kolla/designate-worker 2025.1 66747836589f 23 hours ago 1.01GB 2026-01-08 01:20:17.477296 | orchestrator | registry.osism.tech/kolla/designate-producer 2025.1 76480022c96e 23 hours ago 1GB 2026-01-08 01:20:17.477301 | orchestrator | registry.osism.tech/kolla/designate-mdns 2025.1 b22f41166ad2 23 hours ago 1GB 2026-01-08 01:20:17.477307 | orchestrator | registry.osism.tech/kolla/designate-central 2025.1 8531051f945f 23 hours ago 1e+03MB 2026-01-08 01:20:17.477312 | orchestrator | registry.osism.tech/kolla/designate-api 2025.1 a4af0d4c4ae7 23 hours ago 1GB 2026-01-08 01:20:17.477317 | orchestrator | 
registry.osism.tech/kolla/designate-backend-bind9 2025.1 703a4f87f3a0 23 hours ago 1.01GB 2026-01-08 01:20:17.477322 | orchestrator | registry.osism.tech/kolla/magnum-api 2025.1 0ad03b27ec5b 24 hours ago 1.15GB 2026-01-08 01:20:17.477328 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2025.1 7e0bc3b8ae4b 24 hours ago 1.26GB 2026-01-08 01:20:17.477333 | orchestrator | registry.osism.tech/kolla/octavia-api 2025.1 af35e31a7880 24 hours ago 1.07GB 2026-01-08 01:20:17.477349 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2025.1 2035e610a14b 24 hours ago 1.05GB 2026-01-08 01:20:17.477355 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2025.1 2b74439df4fc 24 hours ago 1.07GB 2026-01-08 01:20:17.477360 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2025.1 59b8f29d4a41 24 hours ago 1.05GB 2026-01-08 01:20:17.477365 | orchestrator | registry.osism.tech/kolla/octavia-worker 2025.1 157a2d695be1 24 hours ago 1.05GB 2026-01-08 01:20:17.477371 | orchestrator | registry.osism.tech/kolla/cinder-volume 2025.1 d7714d51292d 24 hours ago 1.79GB 2026-01-08 01:20:17.477376 | orchestrator | registry.osism.tech/kolla/cinder-api 2025.1 760d9f19d35f 24 hours ago 1.43GB 2026-01-08 01:20:17.477381 | orchestrator | registry.osism.tech/kolla/cinder-backup 2025.1 54775c88e976 24 hours ago 1.44GB 2026-01-08 01:20:17.477387 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2025.1 94e908490f52 24 hours ago 1.43GB 2026-01-08 01:20:17.477392 | orchestrator | registry.osism.tech/kolla/placement-api 2025.1 b273758bdd3e 24 hours ago 992MB 2026-01-08 01:20:17.477398 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2025.1 9d5e54e8f082 24 hours ago 1.05GB 2026-01-08 01:20:17.477409 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2025.1 29e1b1df5c70 24 hours ago 1.05GB 2026-01-08 01:20:17.477419 | orchestrator | registry.osism.tech/kolla/keystone 2025.1 22484a9544f5 24 hours ago 1.1GB 2026-01-08 01:20:17.477424 | 
orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2025.1 a4b841768b0a 24 hours ago 1GB 2026-01-08 01:20:17.477430 | orchestrator | registry.osism.tech/kolla/barbican-worker 2025.1 4e5fff0ee992 24 hours ago 1GB 2026-01-08 01:20:17.477435 | orchestrator | registry.osism.tech/kolla/barbican-api 2025.1 9ab1182c7d0c 24 hours ago 1e+03MB 2026-01-08 01:20:17.477440 | orchestrator | registry.osism.tech/kolla/neutron-server 2025.1 c36c58dd5b6b 24 hours ago 1.23GB 2026-01-08 01:20:17.477446 | orchestrator | registry.osism.tech/kolla/glance-api 2025.1 3a6e0322a87a 24 hours ago 1.12GB 2026-01-08 01:20:17.477451 | orchestrator | registry.osism.tech/kolla/ovn-controller 2025.1 b1de780e3d39 24 hours ago 295MB 2026-01-08 01:20:17.477456 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2025.1 a3aed6cad525 24 hours ago 295MB 2026-01-08 01:20:17.477462 | orchestrator | registry.osism.tech/kolla/ovn-northd 2025.1 7c62f5c8f694 24 hours ago 295MB 2026-01-08 01:20:17.477467 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2025.1 242c25060037 24 hours ago 295MB 2026-01-08 01:20:17.477473 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-relay 2025.1 ef84fca3f266 24 hours ago 295MB 2026-01-08 01:20:17.797193 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-08 01:20:17.797763 | orchestrator | ++ semver latest 5.0.0 2026-01-08 01:20:17.858290 | orchestrator | 2026-01-08 01:20:17.858348 | orchestrator | ## Containers @ testbed-node-2 2026-01-08 01:20:17.858357 | orchestrator | 2026-01-08 01:20:17.858364 | orchestrator | + [[ -1 -eq -1 ]] 2026-01-08 01:20:17.858370 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-01-08 01:20:17.858377 | orchestrator | + echo 2026-01-08 01:20:17.858383 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-01-08 01:20:17.858390 | orchestrator | + echo 2026-01-08 01:20:17.858397 | orchestrator | + osism container testbed-node-2 ps 2026-01-08 01:20:20.354524 | 
orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-01-08 01:20:20.354585 | orchestrator | dced4c9971a6 registry.osism.tech/kolla/octavia-worker:2025.1 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2026-01-08 01:20:20.354595 | orchestrator | 4ad99c71bdef registry.osism.tech/kolla/octavia-housekeeping:2025.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_housekeeping 2026-01-08 01:20:20.354603 | orchestrator | 3f5504af6e70 registry.osism.tech/kolla/octavia-health-manager:2025.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager 2026-01-08 01:20:20.354609 | orchestrator | de83c74e181a registry.osism.tech/kolla/octavia-driver-agent:2025.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2026-01-08 01:20:20.354616 | orchestrator | 76419a057031 registry.osism.tech/kolla/octavia-api:2025.1 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2026-01-08 01:20:20.354622 | orchestrator | a7095cf650e6 registry.osism.tech/kolla/nova-novncproxy:2025.1 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) nova_novncproxy 2026-01-08 01:20:20.354628 | orchestrator | 57161dd4ff45 registry.osism.tech/kolla/nova-conductor:2025.1 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2026-01-08 01:20:20.354635 | orchestrator | c88dd341db4b registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_metadata 2026-01-08 01:20:20.354690 | orchestrator | 7f901f296478 registry.osism.tech/kolla/nova-api:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api 2026-01-08 01:20:20.354711 | orchestrator | 6654f3000b26 registry.osism.tech/kolla/nova-scheduler:2025.1 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2026-01-08 01:20:20.354750 | orchestrator | ec6312c4b903 registry.osism.tech/kolla/cinder-backup:2025.1 "dumb-init 
--single-…" 11 minutes ago Up 11 minutes (healthy) cinder_backup 2026-01-08 01:20:20.354764 | orchestrator | fdca9c0c90f6 registry.osism.tech/kolla/cinder-volume:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_volume 2026-01-08 01:20:20.354770 | orchestrator | 78487a11ae7b registry.osism.tech/kolla/cinder-scheduler:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2026-01-08 01:20:20.354782 | orchestrator | b6283586b95a registry.osism.tech/kolla/glance-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2026-01-08 01:20:20.354788 | orchestrator | 9787ad5909fb registry.osism.tech/kolla/cinder-api:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2026-01-08 01:20:20.354794 | orchestrator | 5b2dfac52944 registry.osism.tech/kolla/grafana:2025.1 "dumb-init --single-…" 11 minutes ago Up 11 minutes grafana 2026-01-08 01:20:20.354800 | orchestrator | efb97af76fb0 registry.osism.tech/kolla/magnum-conductor:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2026-01-08 01:20:20.354807 | orchestrator | ad4c910df71c registry.osism.tech/kolla/magnum-api:2025.1 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2026-01-08 01:20:20.354813 | orchestrator | 3d25c61da7c5 registry.osism.tech/kolla/placement-api:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api 2026-01-08 01:20:20.354819 | orchestrator | 17ef609d066d registry.osism.tech/kolla/neutron-server:2025.1 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2026-01-08 01:20:20.354825 | orchestrator | 2816fc77871f registry.osism.tech/kolla/designate-worker:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker 2026-01-08 01:20:20.354840 | orchestrator | 960cd2c374c7 registry.osism.tech/kolla/designate-mdns:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 
minutes (healthy) designate_mdns 2026-01-08 01:20:20.354847 | orchestrator | b7ab08e48bdb registry.osism.tech/kolla/designate-producer:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2026-01-08 01:20:20.354864 | orchestrator | bf3f121724ef registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter 2026-01-08 01:20:20.354871 | orchestrator | 72c22aa7ece4 registry.osism.tech/kolla/designate-central:2025.1 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2026-01-08 01:20:20.354877 | orchestrator | 91679b2d097d registry.osism.tech/kolla/prometheus-cadvisor:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_cadvisor 2026-01-08 01:20:20.354883 | orchestrator | 3e9d4daaef37 registry.osism.tech/kolla/designate-api:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_api 2026-01-08 01:20:20.354889 | orchestrator | ec7fdf59e2b9 registry.osism.tech/kolla/designate-backend-bind9:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9 2026-01-08 01:20:20.354898 | orchestrator | 6b89378b03cd registry.osism.tech/kolla/prometheus-memcached-exporter:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_memcached_exporter 2026-01-08 01:20:20.354904 | orchestrator | b2e67df53aab registry.osism.tech/kolla/barbican-worker:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker 2026-01-08 01:20:20.354911 | orchestrator | 01bd3fdfde6b registry.osism.tech/kolla/barbican-keystone-listener:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener 2026-01-08 01:20:20.354917 | orchestrator | a7b7bba2d393 registry.osism.tech/kolla/prometheus-mysqld-exporter:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter 2026-01-08 01:20:20.354926 | 
orchestrator | 412358faafeb registry.osism.tech/kolla/barbican-api:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2026-01-08 01:20:20.354932 | orchestrator | 704845f6e19e registry.osism.tech/kolla/prometheus-node-exporter:2025.1 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2026-01-08 01:20:20.354939 | orchestrator | b1fcce721a38 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-2 2026-01-08 01:20:20.354945 | orchestrator | e949c2d1cef1 registry.osism.tech/kolla/keystone:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2026-01-08 01:20:20.354951 | orchestrator | 786543f215e0 registry.osism.tech/kolla/horizon:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2026-01-08 01:20:20.354957 | orchestrator | 057fa2f273ee registry.osism.tech/kolla/keystone-fernet:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2026-01-08 01:20:20.354964 | orchestrator | 1dcd2a27beb0 registry.osism.tech/kolla/keystone-ssh:2025.1 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2026-01-08 01:20:20.354971 | orchestrator | 13af6a7955d6 registry.osism.tech/kolla/mariadb-server:2025.1 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2026-01-08 01:20:20.354977 | orchestrator | 24788d34ce30 registry.osism.tech/kolla/opensearch-dashboards:2025.1 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2026-01-08 01:20:20.354984 | orchestrator | af475981be88 registry.osism.tech/kolla/opensearch:2025.1 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2026-01-08 01:20:20.354990 | orchestrator | 57f5dec2277f registry.osism.tech/kolla/keepalived:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2026-01-08 01:20:20.354997 | orchestrator | 2049e4c84202 
registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-2 2026-01-08 01:20:20.355012 | orchestrator | 9121fa449c86 registry.osism.tech/kolla/proxysql:2025.1 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2026-01-08 01:20:20.355018 | orchestrator | 3c29532fa0bf registry.osism.tech/kolla/haproxy:2025.1 "dumb-init --single-…" 25 minutes ago Up 24 minutes (healthy) haproxy 2026-01-08 01:20:20.355024 | orchestrator | e6334837448b registry.osism.tech/kolla/ovn-northd:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2026-01-08 01:20:20.355031 | orchestrator | e694bf55efdd registry.osism.tech/kolla/rabbitmq:2025.1 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2026-01-08 01:20:20.355042 | orchestrator | f969df355267 registry.osism.tech/kolla/ovn-sb-db-relay:2025.1 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_sb_db_relay_1 2026-01-08 01:20:20.355048 | orchestrator | d3a15ecf298b registry.osism.tech/kolla/ovn-sb-db-server:2025.1 "dumb-init --single-…" 28 minutes ago Up 26 minutes ovn_sb_db 2026-01-08 01:20:20.355068 | orchestrator | 70f88eed408f registry.osism.tech/kolla/ovn-nb-db-server:2025.1 "dumb-init --single-…" 28 minutes ago Up 26 minutes ovn_nb_db 2026-01-08 01:20:20.355074 | orchestrator | 8a090315ac48 registry.osism.tech/kolla/ovn-controller:2025.1 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller 2026-01-08 01:20:20.355080 | orchestrator | d1f63bacaf74 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 29 minutes ago Up 29 minutes ceph-mon-testbed-node-2 2026-01-08 01:20:20.355086 | orchestrator | 54e810c628cc registry.osism.tech/kolla/openvswitch-vswitchd:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd 2026-01-08 01:20:20.355093 | orchestrator | d087ebaee2e8 registry.osism.tech/kolla/openvswitch-db-server:2025.1 "dumb-init --single-…" 30 minutes ago 
Up 30 minutes (healthy) openvswitch_db 2026-01-08 01:20:20.355099 | orchestrator | aa2056d37bb5 registry.osism.tech/kolla/redis-sentinel:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2026-01-08 01:20:20.355106 | orchestrator | a064ebe9cf05 registry.osism.tech/kolla/redis:2025.1 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2026-01-08 01:20:20.355112 | orchestrator | c1c164026fca registry.osism.tech/kolla/memcached:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) memcached 2026-01-08 01:20:20.355119 | orchestrator | 2bf4e01d1a2f registry.osism.tech/kolla/cron:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2026-01-08 01:20:20.355125 | orchestrator | 2ea68d76cc18 registry.osism.tech/kolla/kolla-toolbox:2025.1 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2026-01-08 01:20:20.355136 | orchestrator | 1a8fc9cb698f registry.osism.tech/kolla/fluentd:2025.1 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2026-01-08 01:20:20.692915 | orchestrator | 2026-01-08 01:20:20.692985 | orchestrator | ## Images @ testbed-node-2 2026-01-08 01:20:20.692999 | orchestrator | 2026-01-08 01:20:20.693009 | orchestrator | + echo 2026-01-08 01:20:20.693020 | orchestrator | + echo '## Images @ testbed-node-2' 2026-01-08 01:20:20.693031 | orchestrator | + echo 2026-01-08 01:20:20.693041 | orchestrator | + osism container testbed-node-2 images 2026-01-08 01:20:23.087119 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-08 01:20:23.087177 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 0c99c55b9df6 22 hours ago 1.27GB 2026-01-08 01:20:23.087185 | orchestrator | registry.osism.tech/kolla/haproxy 2025.1 b7e672e6889b 23 hours ago 279MB 2026-01-08 01:20:23.087193 | orchestrator | registry.osism.tech/kolla/opensearch 2025.1 cba5e0b41e63 23 hours ago 1.56GB 2026-01-08 01:20:23.087200 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2025.1 
7560a51b902b 23 hours ago 1.53GB 2026-01-08 01:20:23.087206 | orchestrator | registry.osism.tech/kolla/keepalived 2025.1 560177faa422 23 hours ago 282MB 2026-01-08 01:20:23.087213 | orchestrator | registry.osism.tech/kolla/grafana 2025.1 38cdae39937a 23 hours ago 1.02GB 2026-01-08 01:20:23.087220 | orchestrator | registry.osism.tech/kolla/rabbitmq 2025.1 07811fa0309c 23 hours ago 344MB 2026-01-08 01:20:23.087241 | orchestrator | registry.osism.tech/kolla/cron 2025.1 303e16428a6b 23 hours ago 271MB 2026-01-08 01:20:23.087248 | orchestrator | registry.osism.tech/kolla/fluentd 2025.1 1b11dcf76817 23 hours ago 584MB 2026-01-08 01:20:23.087255 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2025.1 5c7aa618d82d 23 hours ago 679MB 2026-01-08 01:20:23.087261 | orchestrator | registry.osism.tech/kolla/memcached 2025.1 03feb8c06cbf 23 hours ago 272MB 2026-01-08 01:20:23.087268 | orchestrator | registry.osism.tech/kolla/proxysql 2025.1 fca6414d3e37 23 hours ago 417MB 2026-01-08 01:20:23.087275 | orchestrator | registry.osism.tech/kolla/horizon 2025.1 1dec30422405 23 hours ago 1.2GB 2026-01-08 01:20:23.087281 | orchestrator | registry.osism.tech/kolla/redis 2025.1 9c3c95d6a9b1 23 hours ago 278MB 2026-01-08 01:20:23.087287 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2025.1 b0a0b7625646 23 hours ago 278MB 2026-01-08 01:20:23.087294 | orchestrator | registry.osism.tech/kolla/mariadb-server 2025.1 4fd99ce5e261 23 hours ago 457MB 2026-01-08 01:20:23.087301 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2025.1 92280230149e 23 hours ago 287MB 2026-01-08 01:20:23.087308 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2025.1 700a10c15654 23 hours ago 287MB 2026-01-08 01:20:23.087314 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2025.1 c17c794182d5 23 hours ago 297MB 2026-01-08 01:20:23.087322 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2025.1 5ab59bd749db 23 hours ago 306MB 
2026-01-08 01:20:23.087328 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2025.1 1ffbc208ca63 23 hours ago 304MB 2026-01-08 01:20:23.087335 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2025.1 c51c1d344aed 23 hours ago 311MB 2026-01-08 01:20:23.087342 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2025.1 f6347a1bd715 23 hours ago 363MB 2026-01-08 01:20:23.087349 | orchestrator | registry.osism.tech/kolla/nova-api 2025.1 9b55b40f70ed 23 hours ago 1.23GB 2026-01-08 01:20:23.087355 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2025.1 ff7dcbda8159 23 hours ago 1.39GB 2026-01-08 01:20:23.087362 | orchestrator | registry.osism.tech/kolla/nova-conductor 2025.1 1d6c6d828718 23 hours ago 1.23GB 2026-01-08 01:20:23.087378 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2025.1 485725f63810 23 hours ago 1.23GB 2026-01-08 01:20:23.087385 | orchestrator | registry.osism.tech/kolla/designate-worker 2025.1 66747836589f 23 hours ago 1.01GB 2026-01-08 01:20:23.087391 | orchestrator | registry.osism.tech/kolla/designate-producer 2025.1 76480022c96e 23 hours ago 1GB 2026-01-08 01:20:23.087398 | orchestrator | registry.osism.tech/kolla/designate-mdns 2025.1 b22f41166ad2 23 hours ago 1GB 2026-01-08 01:20:23.087405 | orchestrator | registry.osism.tech/kolla/designate-central 2025.1 8531051f945f 23 hours ago 1e+03MB 2026-01-08 01:20:23.087411 | orchestrator | registry.osism.tech/kolla/designate-api 2025.1 a4af0d4c4ae7 23 hours ago 1GB 2026-01-08 01:20:23.087418 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2025.1 703a4f87f3a0 23 hours ago 1.01GB 2026-01-08 01:20:23.087424 | orchestrator | registry.osism.tech/kolla/magnum-api 2025.1 0ad03b27ec5b 24 hours ago 1.15GB 2026-01-08 01:20:23.087431 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2025.1 7e0bc3b8ae4b 24 hours ago 1.26GB 2026-01-08 01:20:23.087438 | orchestrator | registry.osism.tech/kolla/octavia-api 2025.1 
af35e31a7880 24 hours ago 1.07GB 2026-01-08 01:20:23.087460 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2025.1 2035e610a14b 24 hours ago 1.05GB 2026-01-08 01:20:23.087467 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2025.1 2b74439df4fc 24 hours ago 1.07GB 2026-01-08 01:20:23.087474 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2025.1 59b8f29d4a41 24 hours ago 1.05GB 2026-01-08 01:20:23.087480 | orchestrator | registry.osism.tech/kolla/octavia-worker 2025.1 157a2d695be1 24 hours ago 1.05GB 2026-01-08 01:20:23.087486 | orchestrator | registry.osism.tech/kolla/cinder-volume 2025.1 d7714d51292d 24 hours ago 1.79GB 2026-01-08 01:20:23.087493 | orchestrator | registry.osism.tech/kolla/cinder-api 2025.1 760d9f19d35f 24 hours ago 1.43GB 2026-01-08 01:20:23.087500 | orchestrator | registry.osism.tech/kolla/cinder-backup 2025.1 54775c88e976 24 hours ago 1.44GB 2026-01-08 01:20:23.087506 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2025.1 94e908490f52 24 hours ago 1.43GB 2026-01-08 01:20:23.087513 | orchestrator | registry.osism.tech/kolla/placement-api 2025.1 b273758bdd3e 24 hours ago 992MB 2026-01-08 01:20:23.087519 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2025.1 9d5e54e8f082 24 hours ago 1.05GB 2026-01-08 01:20:23.087526 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2025.1 29e1b1df5c70 24 hours ago 1.05GB 2026-01-08 01:20:23.087532 | orchestrator | registry.osism.tech/kolla/keystone 2025.1 22484a9544f5 24 hours ago 1.1GB 2026-01-08 01:20:23.087539 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2025.1 a4b841768b0a 24 hours ago 1GB 2026-01-08 01:20:23.087545 | orchestrator | registry.osism.tech/kolla/barbican-worker 2025.1 4e5fff0ee992 24 hours ago 1GB 2026-01-08 01:20:23.087552 | orchestrator | registry.osism.tech/kolla/barbican-api 2025.1 9ab1182c7d0c 24 hours ago 1e+03MB 2026-01-08 01:20:23.087559 | orchestrator | 
registry.osism.tech/kolla/neutron-server 2025.1 c36c58dd5b6b 24 hours ago 1.23GB 2026-01-08 01:20:23.087568 | orchestrator | registry.osism.tech/kolla/glance-api 2025.1 3a6e0322a87a 24 hours ago 1.12GB 2026-01-08 01:20:23.087575 | orchestrator | registry.osism.tech/kolla/ovn-controller 2025.1 b1de780e3d39 24 hours ago 295MB 2026-01-08 01:20:23.087582 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2025.1 a3aed6cad525 24 hours ago 295MB 2026-01-08 01:20:23.087589 | orchestrator | registry.osism.tech/kolla/ovn-northd 2025.1 7c62f5c8f694 24 hours ago 295MB 2026-01-08 01:20:23.087595 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2025.1 242c25060037 24 hours ago 295MB 2026-01-08 01:20:23.087602 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-relay 2025.1 ef84fca3f266 24 hours ago 295MB 2026-01-08 01:20:23.415124 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-01-08 01:20:23.425413 | orchestrator | + set -e 2026-01-08 01:20:23.425472 | orchestrator | + source /opt/manager-vars.sh 2026-01-08 01:20:23.426708 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-08 01:20:23.426899 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-08 01:20:23.426927 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-08 01:20:23.426949 | orchestrator | ++ CEPH_VERSION=reef 2026-01-08 01:20:23.426970 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-08 01:20:23.426990 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-08 01:20:23.427010 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-08 01:20:23.427028 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-08 01:20:23.427045 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-01-08 01:20:23.427068 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-01-08 01:20:23.427095 | orchestrator | ++ export ARA=false 2026-01-08 01:20:23.427115 | orchestrator | ++ ARA=false 2026-01-08 01:20:23.427134 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-08 01:20:23.427151 | 
orchestrator | ++ DEPLOY_MODE=manager 2026-01-08 01:20:23.427166 | orchestrator | ++ export TEMPEST=true 2026-01-08 01:20:23.427182 | orchestrator | ++ TEMPEST=true 2026-01-08 01:20:23.427229 | orchestrator | ++ export IS_ZUUL=true 2026-01-08 01:20:23.427250 | orchestrator | ++ IS_ZUUL=true 2026-01-08 01:20:23.427268 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.62 2026-01-08 01:20:23.427286 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.62 2026-01-08 01:20:23.427298 | orchestrator | ++ export EXTERNAL_API=false 2026-01-08 01:20:23.427310 | orchestrator | ++ EXTERNAL_API=false 2026-01-08 01:20:23.427329 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-08 01:20:23.427359 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-08 01:20:23.427378 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-08 01:20:23.427397 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-08 01:20:23.427416 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-08 01:20:23.427434 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-08 01:20:23.427453 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-08 01:20:23.427482 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-01-08 01:20:23.435510 | orchestrator | + set -e 2026-01-08 01:20:23.435577 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-08 01:20:23.435589 | orchestrator | ++ export INTERACTIVE=false 2026-01-08 01:20:23.435601 | orchestrator | ++ INTERACTIVE=false 2026-01-08 01:20:23.435611 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-08 01:20:23.435621 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-08 01:20:23.435630 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-01-08 01:20:23.435981 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-01-08 01:20:23.440571 | orchestrator | 2026-01-08 01:20:23.440655 | 
orchestrator | # Ceph status 2026-01-08 01:20:23.440674 | orchestrator | 2026-01-08 01:20:23.440692 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-08 01:20:23.440766 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-08 01:20:23.440813 | orchestrator | + echo 2026-01-08 01:20:23.440839 | orchestrator | + echo '# Ceph status' 2026-01-08 01:20:23.440849 | orchestrator | + echo 2026-01-08 01:20:23.440859 | orchestrator | + ceph -s 2026-01-08 01:20:24.018424 | orchestrator | cluster: 2026-01-08 01:20:24.018492 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-01-08 01:20:24.018503 | orchestrator | health: HEALTH_OK 2026-01-08 01:20:24.018512 | orchestrator | 2026-01-08 01:20:24.018519 | orchestrator | services: 2026-01-08 01:20:24.018526 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 29m) 2026-01-08 01:20:24.018533 | orchestrator | mgr: testbed-node-0(active, since 16m), standbys: testbed-node-1, testbed-node-2 2026-01-08 01:20:24.018540 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-01-08 01:20:24.018547 | orchestrator | osd: 6 osds: 6 up (since 25m), 6 in (since 26m) 2026-01-08 01:20:24.018553 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-01-08 01:20:24.018560 | orchestrator | 2026-01-08 01:20:24.018567 | orchestrator | data: 2026-01-08 01:20:24.018575 | orchestrator | volumes: 1/1 healthy 2026-01-08 01:20:24.018580 | orchestrator | pools: 14 pools, 417 pgs 2026-01-08 01:20:24.018584 | orchestrator | objects: 556 objects, 2.2 GiB 2026-01-08 01:20:24.018589 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-01-08 01:20:24.018593 | orchestrator | pgs: 417 active+clean 2026-01-08 01:20:24.018598 | orchestrator | 2026-01-08 01:20:24.061484 | orchestrator | 2026-01-08 01:20:24.061571 | orchestrator | # Ceph versions 2026-01-08 01:20:24.061583 | orchestrator | 2026-01-08 01:20:24.061590 | orchestrator | + echo 2026-01-08 01:20:24.061596 | orchestrator | + echo '# Ceph 
versions' 2026-01-08 01:20:24.061604 | orchestrator | + echo 2026-01-08 01:20:24.061610 | orchestrator | + ceph versions 2026-01-08 01:20:24.654405 | orchestrator | { 2026-01-08 01:20:24.654456 | orchestrator | "mon": { 2026-01-08 01:20:24.654461 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-01-08 01:20:24.654466 | orchestrator | }, 2026-01-08 01:20:24.654470 | orchestrator | "mgr": { 2026-01-08 01:20:24.654475 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-01-08 01:20:24.654479 | orchestrator | }, 2026-01-08 01:20:24.654482 | orchestrator | "osd": { 2026-01-08 01:20:24.654487 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2026-01-08 01:20:24.654490 | orchestrator | }, 2026-01-08 01:20:24.654494 | orchestrator | "mds": { 2026-01-08 01:20:24.654498 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-01-08 01:20:24.654502 | orchestrator | }, 2026-01-08 01:20:24.654506 | orchestrator | "rgw": { 2026-01-08 01:20:24.654510 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-01-08 01:20:24.654525 | orchestrator | }, 2026-01-08 01:20:24.654529 | orchestrator | "overall": { 2026-01-08 01:20:24.654534 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2026-01-08 01:20:24.654537 | orchestrator | } 2026-01-08 01:20:24.654541 | orchestrator | } 2026-01-08 01:20:24.701311 | orchestrator | 2026-01-08 01:20:24.701370 | orchestrator | # Ceph OSD tree 2026-01-08 01:20:24.701378 | orchestrator | 2026-01-08 01:20:24.701386 | orchestrator | + echo 2026-01-08 01:20:24.701393 | orchestrator | + echo '# Ceph OSD tree' 2026-01-08 01:20:24.701400 | orchestrator | + echo 2026-01-08 01:20:24.701407 | orchestrator | + ceph osd df tree 2026-01-08 01:20:25.233620 | orchestrator | 
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-01-08 01:20:25.233691 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 425 MiB 113 GiB 5.91 1.00 - root default 2026-01-08 01:20:25.233699 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-3 2026-01-08 01:20:25.233706 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.6 GiB 1.5 GiB 1 KiB 70 MiB 18 GiB 7.86 1.33 198 up osd.0 2026-01-08 01:20:25.233765 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 808 MiB 739 MiB 1 KiB 70 MiB 19 GiB 3.95 0.67 206 up osd.5 2026-01-08 01:20:25.233773 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2026-01-08 01:20:25.233779 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 949 MiB 875 MiB 1 KiB 74 MiB 19 GiB 4.64 0.78 184 up osd.1 2026-01-08 01:20:25.233786 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 1 KiB 70 MiB 19 GiB 7.20 1.22 224 up osd.3 2026-01-08 01:20:25.233793 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2026-01-08 01:20:25.233799 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 70 MiB 18 GiB 7.47 1.26 201 up osd.2 2026-01-08 01:20:25.233806 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 892 MiB 819 MiB 1 KiB 74 MiB 19 GiB 4.36 0.74 205 up osd.4 2026-01-08 01:20:25.233812 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 425 MiB 113 GiB 5.91 2026-01-08 01:20:25.233819 | orchestrator | MIN/MAX VAR: 0.67/1.33 STDDEV: 1.62 2026-01-08 01:20:25.282441 | orchestrator | 2026-01-08 01:20:25.282502 | orchestrator | # Ceph monitor status 2026-01-08 01:20:25.282511 | orchestrator | 2026-01-08 01:20:25.282518 | orchestrator | + echo 2026-01-08 01:20:25.282525 | orchestrator | + echo '# Ceph monitor status' 2026-01-08 01:20:25.282532 | orchestrator | + echo 2026-01-08 01:20:25.282538 | orchestrator | + ceph mon stat 2026-01-08 
01:20:25.862071 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2026-01-08 01:20:25.916514 | orchestrator |
2026-01-08 01:20:25.916596 | orchestrator | # Ceph quorum status
2026-01-08 01:20:25.916607 | orchestrator |
2026-01-08 01:20:25.916615 | orchestrator | + echo
2026-01-08 01:20:25.916622 | orchestrator | + echo '# Ceph quorum status'
2026-01-08 01:20:25.916628 | orchestrator | + echo
2026-01-08 01:20:25.916634 | orchestrator | + ceph quorum_status
2026-01-08 01:20:25.916640 | orchestrator | + jq
2026-01-08 01:20:26.558142 | orchestrator | {
2026-01-08 01:20:26.558236 | orchestrator |   "election_epoch": 8,
2026-01-08 01:20:26.558245 | orchestrator |   "quorum": [
2026-01-08 01:20:26.558250 | orchestrator |     0,
2026-01-08 01:20:26.558254 | orchestrator |     1,
2026-01-08 01:20:26.558258 | orchestrator |     2
2026-01-08 01:20:26.558262 | orchestrator |   ],
2026-01-08 01:20:26.558266 | orchestrator |   "quorum_names": [
2026-01-08 01:20:26.558270 | orchestrator |     "testbed-node-0",
2026-01-08 01:20:26.558274 | orchestrator |     "testbed-node-1",
2026-01-08 01:20:26.558278 | orchestrator |     "testbed-node-2"
2026-01-08 01:20:26.558282 | orchestrator |   ],
2026-01-08 01:20:26.558304 | orchestrator |   "quorum_leader_name": "testbed-node-0",
2026-01-08 01:20:26.558309 | orchestrator |   "quorum_age": 1766,
2026-01-08 01:20:26.558313 | orchestrator |   "features": {
2026-01-08 01:20:26.558317 | orchestrator |     "quorum_con": "4540138322906710015",
2026-01-08 01:20:26.558321 | orchestrator |     "quorum_mon": [
2026-01-08 01:20:26.558325 | orchestrator |       "kraken",
2026-01-08 01:20:26.558328 | orchestrator |       "luminous",
2026-01-08 01:20:26.558332 | orchestrator |       "mimic",
2026-01-08 01:20:26.558336 | orchestrator |       "osdmap-prune",
2026-01-08 01:20:26.558340 | orchestrator |       "nautilus",
2026-01-08 01:20:26.558344 | orchestrator |       "octopus",
2026-01-08 01:20:26.558347 | orchestrator |       "pacific",
2026-01-08 01:20:26.558351 | orchestrator |       "elector-pinging",
2026-01-08 01:20:26.558355 | orchestrator |       "quincy",
2026-01-08 01:20:26.558359 | orchestrator |       "reef"
2026-01-08 01:20:26.558363 | orchestrator |     ]
2026-01-08 01:20:26.558366 | orchestrator |   },
2026-01-08 01:20:26.558370 | orchestrator |   "monmap": {
2026-01-08 01:20:26.558374 | orchestrator |     "epoch": 1,
2026-01-08 01:20:26.558378 | orchestrator |     "fsid": "11111111-1111-1111-1111-111111111111",
2026-01-08 01:20:26.558383 | orchestrator |     "modified": "2026-01-08T00:50:42.980017Z",
2026-01-08 01:20:26.558387 | orchestrator |     "created": "2026-01-08T00:50:42.980017Z",
2026-01-08 01:20:26.558391 | orchestrator |     "min_mon_release": 18,
2026-01-08 01:20:26.558395 | orchestrator |     "min_mon_release_name": "reef",
2026-01-08 01:20:26.558399 | orchestrator |     "election_strategy": 1,
2026-01-08 01:20:26.558403 | orchestrator |     "disallowed_leaders: ": "",
2026-01-08 01:20:26.558408 | orchestrator |     "stretch_mode": false,
2026-01-08 01:20:26.558415 | orchestrator |     "tiebreaker_mon": "",
2026-01-08 01:20:26.558421 | orchestrator |     "removed_ranks: ": "",
2026-01-08 01:20:26.558427 | orchestrator |     "features": {
2026-01-08 01:20:26.558432 | orchestrator |       "persistent": [
2026-01-08 01:20:26.558440 | orchestrator |         "kraken",
2026-01-08 01:20:26.558449 | orchestrator |         "luminous",
2026-01-08 01:20:26.558455 | orchestrator |         "mimic",
2026-01-08 01:20:26.558460 | orchestrator |         "osdmap-prune",
2026-01-08 01:20:26.558466 | orchestrator |         "nautilus",
2026-01-08 01:20:26.558472 | orchestrator |         "octopus",
2026-01-08 01:20:26.558478 | orchestrator |         "pacific",
2026-01-08 01:20:26.558485 | orchestrator |         "elector-pinging",
2026-01-08 01:20:26.558491 | orchestrator |         "quincy",
2026-01-08 01:20:26.558497 | orchestrator |         "reef"
2026-01-08 01:20:26.558515 | orchestrator |       ],
2026-01-08 01:20:26.558527 | orchestrator |       "optional": []
2026-01-08 01:20:26.558531 | orchestrator |     },
2026-01-08 01:20:26.558535 | orchestrator |     "mons": [
2026-01-08 01:20:26.558539 | orchestrator |       {
2026-01-08 01:20:26.558543 | orchestrator |         "rank": 0,
2026-01-08 01:20:26.558547 | orchestrator |         "name": "testbed-node-0",
2026-01-08 01:20:26.558550 | orchestrator |         "public_addrs": {
2026-01-08 01:20:26.558554 | orchestrator |           "addrvec": [
2026-01-08 01:20:26.558558 | orchestrator |             {
2026-01-08 01:20:26.558562 | orchestrator |               "type": "v2",
2026-01-08 01:20:26.558565 | orchestrator |               "addr": "192.168.16.10:3300",
2026-01-08 01:20:26.558569 | orchestrator |               "nonce": 0
2026-01-08 01:20:26.558573 | orchestrator |             },
2026-01-08 01:20:26.558577 | orchestrator |             {
2026-01-08 01:20:26.558580 | orchestrator |               "type": "v1",
2026-01-08 01:20:26.558584 | orchestrator |               "addr": "192.168.16.10:6789",
2026-01-08 01:20:26.558588 | orchestrator |               "nonce": 0
2026-01-08 01:20:26.558592 | orchestrator |             }
2026-01-08 01:20:26.558595 | orchestrator |           ]
2026-01-08 01:20:26.558599 | orchestrator |         },
2026-01-08 01:20:26.558603 | orchestrator |         "addr": "192.168.16.10:6789/0",
2026-01-08 01:20:26.558607 | orchestrator |         "public_addr": "192.168.16.10:6789/0",
2026-01-08 01:20:26.558610 | orchestrator |         "priority": 0,
2026-01-08 01:20:26.558614 | orchestrator |         "weight": 0,
2026-01-08 01:20:26.558618 | orchestrator |         "crush_location": "{}"
2026-01-08 01:20:26.558622 | orchestrator |       },
2026-01-08 01:20:26.558625 | orchestrator |       {
2026-01-08 01:20:26.558629 | orchestrator |         "rank": 1,
2026-01-08 01:20:26.558633 | orchestrator |         "name": "testbed-node-1",
2026-01-08 01:20:26.558637 | orchestrator |         "public_addrs": {
2026-01-08 01:20:26.558640 | orchestrator |           "addrvec": [
2026-01-08 01:20:26.558644 | orchestrator |             {
2026-01-08 01:20:26.558648 | orchestrator |               "type": "v2",
2026-01-08 01:20:26.558652 | orchestrator |               "addr": "192.168.16.11:3300",
2026-01-08 01:20:26.558656 | orchestrator |               "nonce": 0
2026-01-08 01:20:26.558661 | orchestrator |             },
2026-01-08 01:20:26.558665 | orchestrator |             {
2026-01-08 01:20:26.558676 | orchestrator |               "type": "v1",
2026-01-08 01:20:26.558682 | orchestrator |               "addr": "192.168.16.11:6789",
2026-01-08 01:20:26.558688 | orchestrator |               "nonce": 0
2026-01-08 01:20:26.558697 | orchestrator |             }
2026-01-08 01:20:26.558705 | orchestrator |           ]
2026-01-08 01:20:26.558772 | orchestrator |         },
2026-01-08 01:20:26.558779 | orchestrator |         "addr": "192.168.16.11:6789/0",
2026-01-08 01:20:26.558786 | orchestrator |         "public_addr": "192.168.16.11:6789/0",
2026-01-08 01:20:26.558842 | orchestrator |         "priority": 0,
2026-01-08 01:20:26.558848 | orchestrator |         "weight": 0,
2026-01-08 01:20:26.558854 | orchestrator |         "crush_location": "{}"
2026-01-08 01:20:26.558860 | orchestrator |       },
2026-01-08 01:20:26.558866 | orchestrator |       {
2026-01-08 01:20:26.558872 | orchestrator |         "rank": 2,
2026-01-08 01:20:26.558879 | orchestrator |         "name": "testbed-node-2",
2026-01-08 01:20:26.558885 | orchestrator |         "public_addrs": {
2026-01-08 01:20:26.558890 | orchestrator |           "addrvec": [
2026-01-08 01:20:26.558897 | orchestrator |             {
2026-01-08 01:20:26.558903 | orchestrator |               "type": "v2",
2026-01-08 01:20:26.558909 | orchestrator |               "addr": "192.168.16.12:3300",
2026-01-08 01:20:26.558916 | orchestrator |               "nonce": 0
2026-01-08 01:20:26.558922 | orchestrator |             },
2026-01-08 01:20:26.558928 | orchestrator |             {
2026-01-08 01:20:26.558935 | orchestrator |               "type": "v1",
2026-01-08 01:20:26.558941 | orchestrator |               "addr": "192.168.16.12:6789",
2026-01-08 01:20:26.558947 | orchestrator |               "nonce": 0
2026-01-08 01:20:26.558954 | orchestrator |             }
2026-01-08 01:20:26.558960 | orchestrator |           ]
2026-01-08 01:20:26.558967 | orchestrator |         },
2026-01-08 01:20:26.558971 | orchestrator |         "addr": "192.168.16.12:6789/0",
2026-01-08 01:20:26.558976 | orchestrator |         "public_addr": "192.168.16.12:6789/0",
2026-01-08 01:20:26.558980 | orchestrator |         "priority": 0,
2026-01-08 01:20:26.558986 | orchestrator |         "weight": 0,
2026-01-08 01:20:26.558990 | orchestrator |         "crush_location": "{}"
2026-01-08 01:20:26.558995 | orchestrator |       }
2026-01-08 01:20:26.558999 | orchestrator |     ]
2026-01-08 01:20:26.559003 | orchestrator |   }
2026-01-08 01:20:26.559008 | orchestrator | }
2026-01-08 01:20:26.559110 | orchestrator |
2026-01-08 01:20:26.559121 | orchestrator | # Ceph free space status
2026-01-08 01:20:26.559127 | orchestrator |
2026-01-08 01:20:26.559133 | orchestrator | + echo
2026-01-08 01:20:26.559139 | orchestrator | + echo '# Ceph free space status'
2026-01-08 01:20:26.559145 | orchestrator | + echo
2026-01-08 01:20:26.559151 | orchestrator | + ceph df
2026-01-08 01:20:27.176620 | orchestrator | --- RAW STORAGE ---
2026-01-08 01:20:27.176671 | orchestrator | CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
2026-01-08 01:20:27.176684 | orchestrator | hdd    120 GiB  113 GiB  7.1 GiB   7.1 GiB       5.91
2026-01-08 01:20:27.176689 | orchestrator | TOTAL  120 GiB  113 GiB  7.1 GiB   7.1 GiB       5.91
2026-01-08 01:20:27.176693 | orchestrator |
2026-01-08 01:20:27.176698 | orchestrator | --- POOLS ---
2026-01-08 01:20:27.176702 | orchestrator | POOL                       ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
2026-01-08 01:20:27.176706 | orchestrator | .mgr                        1    1  577 KiB        2  1.1 MiB      0     52 GiB
2026-01-08 01:20:27.176764 | orchestrator | cephfs_data                 2   32      0 B        0      0 B      0     35 GiB
2026-01-08 01:20:27.176774 | orchestrator | cephfs_metadata             3   32  4.4 KiB       22   96 KiB      0     35 GiB
2026-01-08 01:20:27.176783 | orchestrator | default.rgw.buckets.data    4   32      0 B        0      0 B      0     35 GiB
2026-01-08 01:20:27.176789 | orchestrator | default.rgw.buckets.index   5   32      0 B        0      0 B      0     35 GiB
2026-01-08 01:20:27.176795 | orchestrator | default.rgw.control         6   32      0 B        8      0 B      0     35 GiB
2026-01-08 01:20:27.176801 | orchestrator | default.rgw.log             7   32  3.6 KiB      209  408 KiB      0     35 GiB
2026-01-08 01:20:27.176807 | orchestrator | default.rgw.meta            8   32      0 B        0      0 B      0     35 GiB
2026-01-08 01:20:27.176813 | orchestrator | .rgw.root                   9   32  3.9 KiB        8   64 KiB      0     52 GiB
2026-01-08 01:20:27.176820 | orchestrator | backups                    10   32     19 B        2   12 KiB      0     35 GiB
2026-01-08 01:20:27.176826 | orchestrator | volumes                    11   32     19 B        2   12 KiB      0     35 GiB
2026-01-08 01:20:27.176832 | orchestrator | images                     12   32  2.2 GiB      299  6.7 GiB   5.99     35 GiB
2026-01-08 01:20:27.176839 | orchestrator | metrics                    13   32     19 B        2   12 KiB      0     35 GiB
2026-01-08 01:20:27.176861 | orchestrator | vms                        14   32     19 B        2   12 KiB      0     35 GiB
2026-01-08 01:20:27.221263 | orchestrator | ++ semver latest 5.0.0
2026-01-08 01:20:27.281363 | orchestrator | + [[ -1 -eq -1 ]]
2026-01-08 01:20:27.281422 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-01-08 01:20:27.281428 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2026-01-08 01:20:27.281433 | orchestrator | + osism apply facts
2026-01-08 01:20:29.354990 | orchestrator | 2026-01-08 01:20:29 | INFO  | Task a2b60b28-2561-4e59-8bcf-049751ad976e (facts) was prepared for execution.
2026-01-08 01:20:29.355067 | orchestrator | 2026-01-08 01:20:29 | INFO  | It takes a moment until task a2b60b28-2561-4e59-8bcf-049751ad976e (facts) has been started and output is visible here.
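The `ceph quorum_status` output above is only pretty-printed through `jq` for the log; the actual quorum check happens later in the `osism validate ceph-mons` run ("Fail quorum test if not all monitors are in quorum"). As an illustration only (not part of the job), that kind of check can be sketched in a few lines of Python against a trimmed copy of the JSON above; the helper name `missing_mons` is invented for this example:

```python
import json

# Trimmed sample of the `ceph quorum_status` output shown in the log above.
QUORUM_STATUS = json.loads("""
{
  "quorum_names": ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
  "quorum_leader_name": "testbed-node-0",
  "monmap": {
    "mons": [
      {"rank": 0, "name": "testbed-node-0"},
      {"rank": 1, "name": "testbed-node-1"},
      {"rank": 2, "name": "testbed-node-2"}
    ]
  }
}
""")


def missing_mons(status: dict) -> list:
    """Return monitors that are in the monmap but not currently in quorum."""
    known = {mon["name"] for mon in status["monmap"]["mons"]}
    in_quorum = set(status["quorum_names"])
    return sorted(known - in_quorum)


if __name__ == "__main__":
    # An empty list means every monitor in the monmap is in quorum.
    print(missing_mons(QUORUM_STATUS))
```

With the sample above all three testbed monitors are in quorum, so the helper returns an empty list; a non-empty result would correspond to the validator's failure branch.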
2026-01-08 01:20:42.326555 | orchestrator |
2026-01-08 01:20:42.326651 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-01-08 01:20:42.326662 | orchestrator |
2026-01-08 01:20:42.326669 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-08 01:20:42.326675 | orchestrator | Thursday 08 January 2026 01:20:33 +0000 (0:00:00.289) 0:00:00.289 ******
2026-01-08 01:20:42.326682 | orchestrator | ok: [testbed-manager]
2026-01-08 01:20:42.326689 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:20:42.326695 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:20:42.326734 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:20:42.326740 | orchestrator | ok: [testbed-node-3]
2026-01-08 01:20:42.326745 | orchestrator | ok: [testbed-node-4]
2026-01-08 01:20:42.326752 | orchestrator | ok: [testbed-node-5]
2026-01-08 01:20:42.326757 | orchestrator |
2026-01-08 01:20:42.326763 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-08 01:20:42.326769 | orchestrator | Thursday 08 January 2026 01:20:35 +0000 (0:00:01.547) 0:00:01.837 ******
2026-01-08 01:20:42.326775 | orchestrator | skipping: [testbed-manager]
2026-01-08 01:20:42.326783 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:20:42.326790 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:20:42.326796 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:20:42.326803 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:20:42.326810 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:20:42.326817 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:20:42.326824 | orchestrator |
2026-01-08 01:20:42.326831 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-08 01:20:42.326837 | orchestrator |
2026-01-08 01:20:42.326843 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-08 01:20:42.326849 | orchestrator | Thursday 08 January 2026 01:20:36 +0000 (0:00:01.383) 0:00:03.220 ******
2026-01-08 01:20:42.326855 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:20:42.326862 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:20:42.326868 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:20:42.326874 | orchestrator | ok: [testbed-manager]
2026-01-08 01:20:42.326881 | orchestrator | ok: [testbed-node-4]
2026-01-08 01:20:42.326887 | orchestrator | ok: [testbed-node-3]
2026-01-08 01:20:42.326893 | orchestrator | ok: [testbed-node-5]
2026-01-08 01:20:42.326899 | orchestrator |
2026-01-08 01:20:42.326905 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-08 01:20:42.326912 | orchestrator |
2026-01-08 01:20:42.326920 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-08 01:20:42.326927 | orchestrator | Thursday 08 January 2026 01:20:41 +0000 (0:00:04.508) 0:00:07.729 ******
2026-01-08 01:20:42.326933 | orchestrator | skipping: [testbed-manager]
2026-01-08 01:20:42.326939 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:20:42.326945 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:20:42.326952 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:20:42.326958 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:20:42.326964 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:20:42.326971 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:20:42.326977 | orchestrator |
2026-01-08 01:20:42.326984 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 01:20:42.327017 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 01:20:42.327025 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 01:20:42.327031 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 01:20:42.327037 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 01:20:42.327043 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 01:20:42.327048 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 01:20:42.327054 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 01:20:42.327059 | orchestrator |
2026-01-08 01:20:42.327065 | orchestrator |
2026-01-08 01:20:42.327071 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 01:20:42.327076 | orchestrator | Thursday 08 January 2026 01:20:41 +0000 (0:00:00.578) 0:00:08.307 ******
2026-01-08 01:20:42.327096 | orchestrator | ===============================================================================
2026-01-08 01:20:42.327102 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.51s
2026-01-08 01:20:42.327108 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.55s
2026-01-08 01:20:42.327114 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.38s
2026-01-08 01:20:42.327120 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s
2026-01-08 01:20:42.667157 | orchestrator | + osism validate ceph-mons
2026-01-08 01:21:16.070550 | orchestrator |
2026-01-08 01:21:16.070628 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-01-08 01:21:16.070637 | orchestrator |
2026-01-08 01:21:16.070644 | orchestrator | TASK [Get timestamp for report file]
*******************************************
2026-01-08 01:21:16.070650 | orchestrator | Thursday 08 January 2026 01:20:59 +0000 (0:00:00.440) 0:00:00.440 ******
2026-01-08 01:21:16.070658 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-08 01:21:16.070664 | orchestrator |
2026-01-08 01:21:16.070685 | orchestrator | TASK [Create report output directory] ******************************************
2026-01-08 01:21:16.070692 | orchestrator | Thursday 08 January 2026 01:21:00 +0000 (0:00:00.909) 0:00:01.349 ******
2026-01-08 01:21:16.070698 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-08 01:21:16.070704 | orchestrator |
2026-01-08 01:21:16.070711 | orchestrator | TASK [Define report vars] ******************************************************
2026-01-08 01:21:16.070717 | orchestrator | Thursday 08 January 2026 01:21:01 +0000 (0:00:00.998) 0:00:02.348 ******
2026-01-08 01:21:16.070723 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:21:16.070730 | orchestrator |
2026-01-08 01:21:16.070736 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-01-08 01:21:16.070742 | orchestrator | Thursday 08 January 2026 01:21:01 +0000 (0:00:00.138) 0:00:02.486 ******
2026-01-08 01:21:16.070749 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:21:16.070755 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:21:16.070761 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:21:16.070767 | orchestrator |
2026-01-08 01:21:16.070773 | orchestrator | TASK [Get container info] ******************************************************
2026-01-08 01:21:16.070779 | orchestrator | Thursday 08 January 2026 01:21:01 +0000 (0:00:00.294) 0:00:02.780 ******
2026-01-08 01:21:16.070785 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:21:16.070805 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:21:16.070811 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:21:16.070818 | orchestrator |
2026-01-08 01:21:16.070824 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-01-08 01:21:16.070830 | orchestrator | Thursday 08 January 2026 01:21:03 +0000 (0:00:01.162) 0:00:03.943 ******
2026-01-08 01:21:16.070836 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:21:16.070842 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:21:16.070848 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:21:16.070854 | orchestrator |
2026-01-08 01:21:16.070861 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-01-08 01:21:16.070867 | orchestrator | Thursday 08 January 2026 01:21:03 +0000 (0:00:00.303) 0:00:04.247 ******
2026-01-08 01:21:16.070878 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:21:16.070884 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:21:16.070891 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:21:16.070897 | orchestrator |
2026-01-08 01:21:16.070912 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-08 01:21:16.070918 | orchestrator | Thursday 08 January 2026 01:21:03 +0000 (0:00:00.529) 0:00:04.776 ******
2026-01-08 01:21:16.070929 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:21:16.070936 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:21:16.070941 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:21:16.070948 | orchestrator |
2026-01-08 01:21:16.070954 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-01-08 01:21:16.070960 | orchestrator | Thursday 08 January 2026 01:21:04 +0000 (0:00:00.332) 0:00:05.109 ******
2026-01-08 01:21:16.070966 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:21:16.070972 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:21:16.070978 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:21:16.070984 | orchestrator |
2026-01-08 01:21:16.070990 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-01-08 01:21:16.070996 | orchestrator | Thursday 08 January 2026 01:21:04 +0000 (0:00:00.361) 0:00:05.470 ******
2026-01-08 01:21:16.071002 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:21:16.071008 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:21:16.071014 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:21:16.071020 | orchestrator |
2026-01-08 01:21:16.071027 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-01-08 01:21:16.071033 | orchestrator | Thursday 08 January 2026 01:21:05 +0000 (0:00:00.501) 0:00:05.971 ******
2026-01-08 01:21:16.071040 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:21:16.071046 | orchestrator |
2026-01-08 01:21:16.071052 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-01-08 01:21:16.071058 | orchestrator | Thursday 08 January 2026 01:21:05 +0000 (0:00:00.275) 0:00:06.247 ******
2026-01-08 01:21:16.071064 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:21:16.071070 | orchestrator |
2026-01-08 01:21:16.071076 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-01-08 01:21:16.071082 | orchestrator | Thursday 08 January 2026 01:21:05 +0000 (0:00:00.265) 0:00:06.512 ******
2026-01-08 01:21:16.071088 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:21:16.071094 | orchestrator |
2026-01-08 01:21:16.071100 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-08 01:21:16.071106 | orchestrator | Thursday 08 January 2026 01:21:05 +0000 (0:00:00.268) 0:00:06.780 ******
2026-01-08 01:21:16.071113 | orchestrator |
2026-01-08 01:21:16.071119 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-08 01:21:16.071124 | orchestrator | Thursday 08 January 2026 01:21:05 +0000 (0:00:00.075) 0:00:06.855 ******
2026-01-08 01:21:16.071131 | orchestrator |
2026-01-08 01:21:16.071137 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-08 01:21:16.071143 | orchestrator | Thursday 08 January 2026 01:21:06 +0000 (0:00:00.078) 0:00:06.934 ******
2026-01-08 01:21:16.071149 | orchestrator |
2026-01-08 01:21:16.071156 | orchestrator | TASK [Print report file information] *******************************************
2026-01-08 01:21:16.071166 | orchestrator | Thursday 08 January 2026 01:21:06 +0000 (0:00:00.080) 0:00:07.015 ******
2026-01-08 01:21:16.071172 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:21:16.071178 | orchestrator |
2026-01-08 01:21:16.071185 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-01-08 01:21:16.071192 | orchestrator | Thursday 08 January 2026 01:21:06 +0000 (0:00:00.256) 0:00:07.272 ******
2026-01-08 01:21:16.071198 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:21:16.071204 | orchestrator |
2026-01-08 01:21:16.071224 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-01-08 01:21:16.071230 | orchestrator | Thursday 08 January 2026 01:21:06 +0000 (0:00:00.247) 0:00:07.519 ******
2026-01-08 01:21:16.071236 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:21:16.071243 | orchestrator |
2026-01-08 01:21:16.071249 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-01-08 01:21:16.071255 | orchestrator | Thursday 08 January 2026 01:21:06 +0000 (0:00:00.115) 0:00:07.635 ******
2026-01-08 01:21:16.071261 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:21:16.071268 | orchestrator |
2026-01-08 01:21:16.071274 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-01-08 01:21:16.071280 | orchestrator | Thursday 08 January 2026 01:21:08 +0000 (0:00:01.737) 0:00:09.373 ******
2026-01-08 01:21:16.071286 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:21:16.071293 | orchestrator |
2026-01-08 01:21:16.071300 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-01-08 01:21:16.071306 | orchestrator | Thursday 08 January 2026 01:21:09 +0000 (0:00:00.512) 0:00:09.886 ******
2026-01-08 01:21:16.071312 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:21:16.071318 | orchestrator |
2026-01-08 01:21:16.071324 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-01-08 01:21:16.071330 | orchestrator | Thursday 08 January 2026 01:21:09 +0000 (0:00:00.120) 0:00:10.006 ******
2026-01-08 01:21:16.071336 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:21:16.071343 | orchestrator |
2026-01-08 01:21:16.071349 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-01-08 01:21:16.071355 | orchestrator | Thursday 08 January 2026 01:21:09 +0000 (0:00:00.342) 0:00:10.348 ******
2026-01-08 01:21:16.071361 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:21:16.071367 | orchestrator |
2026-01-08 01:21:16.071374 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-01-08 01:21:16.071380 | orchestrator | Thursday 08 January 2026 01:21:09 +0000 (0:00:00.317) 0:00:10.665 ******
2026-01-08 01:21:16.071394 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:21:16.071401 | orchestrator |
2026-01-08 01:21:16.071407 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-01-08 01:21:16.071413 | orchestrator | Thursday 08 January 2026 01:21:09 +0000 (0:00:00.132) 0:00:10.798 ******
2026-01-08 01:21:16.071419 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:21:16.071425 | orchestrator |
2026-01-08 01:21:16.071431 | orchestrator | TASK
[Prepare status test vars] ************************************************ 2026-01-08 01:21:16.071438 | orchestrator | Thursday 08 January 2026 01:21:10 +0000 (0:00:00.119) 0:00:10.918 ****** 2026-01-08 01:21:16.071444 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:21:16.071450 | orchestrator | 2026-01-08 01:21:16.071456 | orchestrator | TASK [Gather status data] ****************************************************** 2026-01-08 01:21:16.071462 | orchestrator | Thursday 08 January 2026 01:21:10 +0000 (0:00:00.128) 0:00:11.046 ****** 2026-01-08 01:21:16.071468 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:21:16.071474 | orchestrator | 2026-01-08 01:21:16.071481 | orchestrator | TASK [Set health test data] **************************************************** 2026-01-08 01:21:16.071487 | orchestrator | Thursday 08 January 2026 01:21:11 +0000 (0:00:01.681) 0:00:12.727 ****** 2026-01-08 01:21:16.071493 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:21:16.071499 | orchestrator | 2026-01-08 01:21:16.071505 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-01-08 01:21:16.071566 | orchestrator | Thursday 08 January 2026 01:21:12 +0000 (0:00:00.347) 0:00:13.074 ****** 2026-01-08 01:21:16.071573 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:21:16.071577 | orchestrator | 2026-01-08 01:21:16.071581 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-01-08 01:21:16.071584 | orchestrator | Thursday 08 January 2026 01:21:12 +0000 (0:00:00.131) 0:00:13.206 ****** 2026-01-08 01:21:16.071588 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:21:16.071592 | orchestrator | 2026-01-08 01:21:16.071596 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-01-08 01:21:16.071599 | orchestrator | Thursday 08 January 2026 01:21:12 +0000 (0:00:00.155) 0:00:13.362 ****** 2026-01-08 01:21:16.071603 | 
orchestrator | skipping: [testbed-node-0] 2026-01-08 01:21:16.071607 | orchestrator | 2026-01-08 01:21:16.071611 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-01-08 01:21:16.071615 | orchestrator | Thursday 08 January 2026 01:21:12 +0000 (0:00:00.335) 0:00:13.697 ****** 2026-01-08 01:21:16.071619 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:21:16.071625 | orchestrator | 2026-01-08 01:21:16.071629 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-01-08 01:21:16.071633 | orchestrator | Thursday 08 January 2026 01:21:12 +0000 (0:00:00.147) 0:00:13.845 ****** 2026-01-08 01:21:16.071637 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-08 01:21:16.071641 | orchestrator | 2026-01-08 01:21:16.071645 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-01-08 01:21:16.071648 | orchestrator | Thursday 08 January 2026 01:21:13 +0000 (0:00:00.298) 0:00:14.144 ****** 2026-01-08 01:21:16.071652 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:21:16.071656 | orchestrator | 2026-01-08 01:21:16.071660 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-08 01:21:16.071664 | orchestrator | Thursday 08 January 2026 01:21:13 +0000 (0:00:00.257) 0:00:14.401 ****** 2026-01-08 01:21:16.071667 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-08 01:21:16.071710 | orchestrator | 2026-01-08 01:21:16.071714 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-08 01:21:16.071722 | orchestrator | Thursday 08 January 2026 01:21:15 +0000 (0:00:01.742) 0:00:16.143 ****** 2026-01-08 01:21:16.071726 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-08 01:21:16.071730 | orchestrator | 2026-01-08 01:21:16.071734 | orchestrator | 
TASK [Aggregate test results step three] ***************************************
2026-01-08 01:21:16.071738 | orchestrator | Thursday 08 January 2026 01:21:15 +0000 (0:00:00.284) 0:00:16.428 ******
2026-01-08 01:21:16.071741 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-08 01:21:16.071745 | orchestrator |
2026-01-08 01:21:16.071755 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-08 01:21:18.814973 | orchestrator | Thursday 08 January 2026 01:21:15 +0000 (0:00:00.267) 0:00:16.696 ******
2026-01-08 01:21:18.815053 | orchestrator |
2026-01-08 01:21:18.815059 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-08 01:21:18.815072 | orchestrator | Thursday 08 January 2026 01:21:15 +0000 (0:00:00.076) 0:00:16.772 ******
2026-01-08 01:21:18.815076 | orchestrator |
2026-01-08 01:21:18.815080 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-08 01:21:18.815084 | orchestrator | Thursday 08 January 2026 01:21:15 +0000 (0:00:00.075) 0:00:16.848 ******
2026-01-08 01:21:18.815088 | orchestrator |
2026-01-08 01:21:18.815092 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-01-08 01:21:18.815096 | orchestrator | Thursday 08 January 2026 01:21:16 +0000 (0:00:00.079) 0:00:16.928 ******
2026-01-08 01:21:18.815101 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-08 01:21:18.815104 | orchestrator |
2026-01-08 01:21:18.815108 | orchestrator | TASK [Print report file information] *******************************************
2026-01-08 01:21:18.815132 | orchestrator | Thursday 08 January 2026 01:21:17 +0000 (0:00:01.559) 0:00:18.488 ******
2026-01-08 01:21:18.815136 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-01-08 01:21:18.815141 | orchestrator |  "msg": [
2026-01-08 01:21:18.815146 | orchestrator |  "Validator run completed.",
2026-01-08 01:21:18.815150 | orchestrator |  "You can find the report file here:",
2026-01-08 01:21:18.815155 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-01-08T01:21:00+00:00-report.json",
2026-01-08 01:21:18.815160 | orchestrator |  "on the following host:",
2026-01-08 01:21:18.815164 | orchestrator |  "testbed-manager"
2026-01-08 01:21:18.815169 | orchestrator |  ]
2026-01-08 01:21:18.815173 | orchestrator | }
2026-01-08 01:21:18.815177 | orchestrator |
2026-01-08 01:21:18.815181 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 01:21:18.815186 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-01-08 01:21:18.815191 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 01:21:18.815195 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 01:21:18.815199 | orchestrator |
2026-01-08 01:21:18.815202 | orchestrator |
2026-01-08 01:21:18.815206 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 01:21:18.815210 | orchestrator | Thursday 08 January 2026 01:21:18 +0000 (0:00:00.861) 0:00:19.349 ******
2026-01-08 01:21:18.815214 | orchestrator | ===============================================================================
2026-01-08 01:21:18.815217 | orchestrator | Aggregate test results step one ----------------------------------------- 1.74s
2026-01-08 01:21:18.815221 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.74s
2026-01-08 01:21:18.815225 | orchestrator | Gather status data ------------------------------------------------------ 1.68s
2026-01-08 01:21:18.815228 | orchestrator | Write report file ------------------------------------------------------- 1.56s
2026-01-08 01:21:18.815232 | orchestrator | Get container info ------------------------------------------------------ 1.16s
2026-01-08 01:21:18.815236 | orchestrator | Create report output directory ------------------------------------------ 1.00s
2026-01-08 01:21:18.815239 | orchestrator | Get timestamp for report file ------------------------------------------- 0.91s
2026-01-08 01:21:18.815243 | orchestrator | Print report file information ------------------------------------------- 0.86s
2026-01-08 01:21:18.815247 | orchestrator | Set test result to passed if container is existing ---------------------- 0.53s
2026-01-08 01:21:18.815250 | orchestrator | Set quorum test data ---------------------------------------------------- 0.51s
2026-01-08 01:21:18.815254 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.50s
2026-01-08 01:21:18.815258 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.36s
2026-01-08 01:21:18.815261 | orchestrator | Set health test data ---------------------------------------------------- 0.35s
2026-01-08 01:21:18.815265 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.34s
2026-01-08 01:21:18.815269 | orchestrator | Fail cluster-health if health is not acceptable (strict) ---------------- 0.34s
2026-01-08 01:21:18.815272 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s
2026-01-08 01:21:18.815276 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.32s
2026-01-08 01:21:18.815280 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s
2026-01-08 01:21:18.815283 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.30s
2026-01-08 01:21:18.815287 | orchestrator | Prepare test data for container
existance test -------------------------- 0.29s
2026-01-08 01:21:19.158393 | orchestrator | + osism validate ceph-mgrs
2026-01-08 01:21:51.131041 | orchestrator |
2026-01-08 01:21:51.131163 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-01-08 01:21:51.131178 | orchestrator |
2026-01-08 01:21:51.131187 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-01-08 01:21:51.131194 | orchestrator | Thursday 08 January 2026 01:21:35 +0000 (0:00:00.437) 0:00:00.437 ******
2026-01-08 01:21:51.131201 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-08 01:21:51.131208 | orchestrator |
2026-01-08 01:21:51.131213 | orchestrator | TASK [Create report output directory] ******************************************
2026-01-08 01:21:51.131219 | orchestrator | Thursday 08 January 2026 01:21:36 +0000 (0:00:00.822) 0:00:01.259 ******
2026-01-08 01:21:51.131225 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-08 01:21:51.131231 | orchestrator |
2026-01-08 01:21:51.131237 | orchestrator | TASK [Define report vars] ******************************************************
2026-01-08 01:21:51.131244 | orchestrator | Thursday 08 January 2026 01:21:37 +0000 (0:00:01.028) 0:00:02.288 ******
2026-01-08 01:21:51.131250 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:21:51.131257 | orchestrator |
2026-01-08 01:21:51.131263 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-01-08 01:21:51.131269 | orchestrator | Thursday 08 January 2026 01:21:37 +0000 (0:00:00.298) 0:00:02.421 ******
2026-01-08 01:21:51.131274 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:21:51.131280 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:21:51.131285 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:21:51.131291 | orchestrator |
2026-01-08 01:21:51.131296 | orchestrator | TASK [Get container info] ******************************************************
2026-01-08 01:21:51.131302 | orchestrator | Thursday 08 January 2026 01:21:38 +0000 (0:00:00.298) 0:00:02.719 ******
2026-01-08 01:21:51.131307 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:21:51.131313 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:21:51.131318 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:21:51.131323 | orchestrator |
2026-01-08 01:21:51.131328 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-01-08 01:21:51.131333 | orchestrator | Thursday 08 January 2026 01:21:39 +0000 (0:00:01.157) 0:00:03.877 ******
2026-01-08 01:21:51.131339 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:21:51.131344 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:21:51.131349 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:21:51.131355 | orchestrator |
2026-01-08 01:21:51.131360 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-01-08 01:21:51.131365 | orchestrator | Thursday 08 January 2026 01:21:39 +0000 (0:00:00.325) 0:00:04.202 ******
2026-01-08 01:21:51.131371 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:21:51.131377 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:21:51.131382 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:21:51.131388 | orchestrator |
2026-01-08 01:21:51.131393 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-08 01:21:51.131399 | orchestrator | Thursday 08 January 2026 01:21:40 +0000 (0:00:00.516) 0:00:04.719 ******
2026-01-08 01:21:51.131405 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:21:51.131410 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:21:51.131416 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:21:51.131421 | orchestrator |
2026-01-08 01:21:51.131427 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-01-08 01:21:51.131432 | orchestrator | Thursday 08 January 2026 01:21:40 +0000 (0:00:00.368) 0:00:05.088 ******
2026-01-08 01:21:51.131438 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:21:51.131443 | orchestrator | skipping: [testbed-node-1]
2026-01-08 01:21:51.131449 | orchestrator | skipping: [testbed-node-2]
2026-01-08 01:21:51.131455 | orchestrator |
2026-01-08 01:21:51.131461 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-01-08 01:21:51.131467 | orchestrator | Thursday 08 January 2026 01:21:40 +0000 (0:00:00.282) 0:00:05.371 ******
2026-01-08 01:21:51.131473 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:21:51.131499 | orchestrator | ok: [testbed-node-1]
2026-01-08 01:21:51.131506 | orchestrator | ok: [testbed-node-2]
2026-01-08 01:21:51.131513 | orchestrator |
2026-01-08 01:21:51.131520 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-01-08 01:21:51.131526 | orchestrator | Thursday 08 January 2026 01:21:41 +0000 (0:00:00.489) 0:00:05.860 ******
2026-01-08 01:21:51.131532 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:21:51.131539 | orchestrator |
2026-01-08 01:21:51.131544 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-01-08 01:21:51.131550 | orchestrator | Thursday 08 January 2026 01:21:41 +0000 (0:00:00.292) 0:00:06.152 ******
2026-01-08 01:21:51.131555 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:21:51.131561 | orchestrator |
2026-01-08 01:21:51.131566 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-01-08 01:21:51.131573 | orchestrator | Thursday 08 January 2026 01:21:41 +0000 (0:00:00.252) 0:00:06.405 ******
2026-01-08 01:21:51.131580 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:21:51.131587 | orchestrator |
2026-01-08 01:21:51.131593 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-08 01:21:51.131600 | orchestrator | Thursday 08 January 2026 01:21:42 +0000 (0:00:00.263) 0:00:06.669 ******
2026-01-08 01:21:51.131607 | orchestrator |
2026-01-08 01:21:51.131613 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-08 01:21:51.131619 | orchestrator | Thursday 08 January 2026 01:21:42 +0000 (0:00:00.078) 0:00:06.747 ******
2026-01-08 01:21:51.131625 | orchestrator |
2026-01-08 01:21:51.131632 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-08 01:21:51.131661 | orchestrator | Thursday 08 January 2026 01:21:42 +0000 (0:00:00.074) 0:00:06.821 ******
2026-01-08 01:21:51.131688 | orchestrator |
2026-01-08 01:21:51.131694 | orchestrator | TASK [Print report file information] *******************************************
2026-01-08 01:21:51.131701 | orchestrator | Thursday 08 January 2026 01:21:42 +0000 (0:00:00.074) 0:00:06.895 ******
2026-01-08 01:21:51.131707 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:21:51.131712 | orchestrator |
2026-01-08 01:21:51.131718 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-01-08 01:21:51.131725 | orchestrator | Thursday 08 January 2026 01:21:42 +0000 (0:00:00.256) 0:00:07.151 ******
2026-01-08 01:21:51.131731 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:21:51.131737 | orchestrator |
2026-01-08 01:21:51.131762 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-01-08 01:21:51.131781 | orchestrator | Thursday 08 January 2026 01:21:42 +0000 (0:00:00.258) 0:00:07.410 ******
2026-01-08 01:21:51.131788 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:21:51.131794 | orchestrator |
2026-01-08 01:21:51.131801 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-01-08 01:21:51.131807 | orchestrator | Thursday 08 January 2026 01:21:43 +0000 (0:00:00.125) 0:00:07.535 ******
2026-01-08 01:21:51.131813 | orchestrator | changed: [testbed-node-0]
2026-01-08 01:21:51.131819 | orchestrator |
2026-01-08 01:21:51.131825 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-01-08 01:21:51.131831 | orchestrator | Thursday 08 January 2026 01:21:45 +0000 (0:00:02.402) 0:00:09.938 ******
2026-01-08 01:21:51.131836 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:21:51.131843 | orchestrator |
2026-01-08 01:21:51.131850 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-01-08 01:21:51.131857 | orchestrator | Thursday 08 January 2026 01:21:45 +0000 (0:00:00.462) 0:00:10.400 ******
2026-01-08 01:21:51.131864 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:21:51.131871 | orchestrator |
2026-01-08 01:21:51.131878 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-01-08 01:21:51.131885 | orchestrator | Thursday 08 January 2026 01:21:46 +0000 (0:00:00.351) 0:00:10.752 ******
2026-01-08 01:21:51.131893 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:21:51.131900 | orchestrator |
2026-01-08 01:21:51.131907 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-01-08 01:21:51.131924 | orchestrator | Thursday 08 January 2026 01:21:46 +0000 (0:00:00.184) 0:00:10.937 ******
2026-01-08 01:21:51.131932 | orchestrator | ok: [testbed-node-0]
2026-01-08 01:21:51.131939 | orchestrator |
2026-01-08 01:21:51.131946 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-01-08 01:21:51.131952 | orchestrator | Thursday 08 January 2026 01:21:46 +0000 (0:00:00.156) 0:00:11.093 ******
2026-01-08 01:21:51.131959 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-08 01:21:51.131966 | orchestrator |
2026-01-08 01:21:51.131973 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-01-08 01:21:51.131980 | orchestrator | Thursday 08 January 2026 01:21:46 +0000 (0:00:00.254) 0:00:11.348 ******
2026-01-08 01:21:51.131986 | orchestrator | skipping: [testbed-node-0]
2026-01-08 01:21:51.131993 | orchestrator |
2026-01-08 01:21:51.132000 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-01-08 01:21:51.132006 | orchestrator | Thursday 08 January 2026 01:21:47 +0000 (0:00:00.260) 0:00:11.609 ******
2026-01-08 01:21:51.132013 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-08 01:21:51.132020 | orchestrator |
2026-01-08 01:21:51.132027 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-01-08 01:21:51.132033 | orchestrator | Thursday 08 January 2026 01:21:48 +0000 (0:00:01.249) 0:00:12.859 ******
2026-01-08 01:21:51.132040 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-08 01:21:51.132047 | orchestrator |
2026-01-08 01:21:51.132053 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-01-08 01:21:51.132060 | orchestrator | Thursday 08 January 2026 01:21:48 +0000 (0:00:00.253) 0:00:13.112 ******
2026-01-08 01:21:51.132067 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-08 01:21:51.132074 | orchestrator |
2026-01-08 01:21:51.132081 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-08 01:21:51.132088 | orchestrator | Thursday 08 January 2026 01:21:48 +0000 (0:00:00.265) 0:00:13.377 ******
2026-01-08 01:21:51.132095 | orchestrator |
2026-01-08 01:21:51.132102 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-08 01:21:51.132108 | orchestrator | Thursday 08 January 2026 01:21:48 +0000 (0:00:00.069) 0:00:13.447 ******
2026-01-08 01:21:51.132115 | orchestrator |
2026-01-08 01:21:51.132122 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-08 01:21:51.132129 | orchestrator | Thursday 08 January 2026 01:21:49 +0000 (0:00:00.071) 0:00:13.519 ******
2026-01-08 01:21:51.132136 | orchestrator |
2026-01-08 01:21:51.132143 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-01-08 01:21:51.132149 | orchestrator | Thursday 08 January 2026 01:21:49 +0000 (0:00:00.262) 0:00:13.781 ******
2026-01-08 01:21:51.132156 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-08 01:21:51.132163 | orchestrator |
2026-01-08 01:21:51.132170 | orchestrator | TASK [Print report file information] *******************************************
2026-01-08 01:21:51.132177 | orchestrator | Thursday 08 January 2026 01:21:50 +0000 (0:00:01.337) 0:00:15.118 ******
2026-01-08 01:21:51.132184 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-01-08 01:21:51.132191 | orchestrator |  "msg": [
2026-01-08 01:21:51.132199 | orchestrator |  "Validator run completed.",
2026-01-08 01:21:51.132205 | orchestrator |  "You can find the report file here:",
2026-01-08 01:21:51.132210 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-01-08T01:21:36+00:00-report.json",
2026-01-08 01:21:51.132217 | orchestrator |  "on the following host:",
2026-01-08 01:21:51.132223 | orchestrator |  "testbed-manager"
2026-01-08 01:21:51.132229 | orchestrator |  ]
2026-01-08 01:21:51.132236 | orchestrator | }
2026-01-08 01:21:51.132243 | orchestrator |
2026-01-08 01:21:51.132249 | orchestrator | PLAY RECAP *********************************************************************
2026-01-08 01:21:51.132261 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-08 01:21:51.132266 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 01:21:51.132278 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-08 01:21:51.477081 | orchestrator |
2026-01-08 01:21:51.477169 | orchestrator |
2026-01-08 01:21:51.477201 | orchestrator | TASKS RECAP ********************************************************************
2026-01-08 01:21:51.477211 | orchestrator | Thursday 08 January 2026 01:21:51 +0000 (0:00:00.444) 0:00:15.563 ******
2026-01-08 01:21:51.477217 | orchestrator | ===============================================================================
2026-01-08 01:21:51.477224 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.40s
2026-01-08 01:21:51.477231 | orchestrator | Write report file ------------------------------------------------------- 1.34s
2026-01-08 01:21:51.477238 | orchestrator | Aggregate test results step one ----------------------------------------- 1.25s
2026-01-08 01:21:51.477245 | orchestrator | Get container info ------------------------------------------------------ 1.16s
2026-01-08 01:21:51.477251 | orchestrator | Create report output directory ------------------------------------------ 1.03s
2026-01-08 01:21:51.477258 | orchestrator | Get timestamp for report file ------------------------------------------- 0.82s
2026-01-08 01:21:51.477265 | orchestrator | Set test result to passed if container is existing ---------------------- 0.52s
2026-01-08 01:21:51.477272 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.49s
2026-01-08 01:21:51.477279 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.46s
2026-01-08 01:21:51.477286 | orchestrator | Print report file information ------------------------------------------- 0.44s
2026-01-08 01:21:51.477293 |
orchestrator | Flush handlers ---------------------------------------------------------- 0.40s
2026-01-08 01:21:51.477297 | orchestrator | Prepare test data ------------------------------------------------------- 0.37s
2026-01-08 01:21:51.477301 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.35s
2026-01-08 01:21:51.477305 | orchestrator | Set test result to failed if container is missing ----------------------- 0.33s
2026-01-08 01:21:51.477309 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s
2026-01-08 01:21:51.477313 | orchestrator | Aggregate test results step one ----------------------------------------- 0.29s
2026-01-08 01:21:51.477317 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.28s
2026-01-08 01:21:51.477321 | orchestrator | Aggregate test results step three --------------------------------------- 0.27s
2026-01-08 01:21:51.477325 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s
2026-01-08 01:21:51.477328 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.26s
2026-01-08 01:21:51.807404 | orchestrator | + osism validate ceph-osds
2026-01-08 01:22:13.203349 | orchestrator |
2026-01-08 01:22:13.203449 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-01-08 01:22:13.203460 | orchestrator |
2026-01-08 01:22:13.203467 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-01-08 01:22:13.203474 | orchestrator | Thursday 08 January 2026 01:22:08 +0000 (0:00:00.440) 0:00:00.440 ******
2026-01-08 01:22:13.203481 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-08 01:22:13.203488 | orchestrator |
2026-01-08 01:22:13.203494 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-08 01:22:13.203500 | orchestrator | Thursday 08 January 2026 01:22:09 +0000 (0:00:00.846) 0:00:01.287 ******
2026-01-08 01:22:13.203506 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-08 01:22:13.203513 | orchestrator |
2026-01-08 01:22:13.203519 | orchestrator | TASK [Create report output directory] ******************************************
2026-01-08 01:22:13.203545 | orchestrator | Thursday 08 January 2026 01:22:09 +0000 (0:00:00.527) 0:00:01.814 ******
2026-01-08 01:22:13.203552 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-08 01:22:13.203557 | orchestrator |
2026-01-08 01:22:13.203563 | orchestrator | TASK [Define report vars] ******************************************************
2026-01-08 01:22:13.203569 | orchestrator | Thursday 08 January 2026 01:22:10 +0000 (0:00:00.787) 0:00:02.602 ******
2026-01-08 01:22:13.203574 | orchestrator | ok: [testbed-node-3]
2026-01-08 01:22:13.203582 | orchestrator |
2026-01-08 01:22:13.203588 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-01-08 01:22:13.203594 | orchestrator | Thursday 08 January 2026 01:22:10 +0000 (0:00:00.143) 0:00:02.746 ******
2026-01-08 01:22:13.203600 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:22:13.203605 | orchestrator |
2026-01-08 01:22:13.203612 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-01-08 01:22:13.203617 | orchestrator | Thursday 08 January 2026 01:22:11 +0000 (0:00:00.126) 0:00:02.873 ******
2026-01-08 01:22:13.203671 | orchestrator | skipping: [testbed-node-3]
2026-01-08 01:22:13.203677 | orchestrator | skipping: [testbed-node-4]
2026-01-08 01:22:13.203682 | orchestrator | skipping: [testbed-node-5]
2026-01-08 01:22:13.203687 | orchestrator |
2026-01-08 01:22:13.203693 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-01-08 01:22:13.203699 | orchestrator | Thursday 08 January 2026 01:22:11 +0000 (0:00:00.331) 0:00:03.204 ******
2026-01-08 01:22:13.203705 | orchestrator | ok: [testbed-node-3]
2026-01-08 01:22:13.203711 | orchestrator |
2026-01-08 01:22:13.203717 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-01-08 01:22:13.203723 | orchestrator | Thursday 08 January 2026 01:22:11 +0000 (0:00:00.151) 0:00:03.356 ******
2026-01-08 01:22:13.203727 | orchestrator | ok: [testbed-node-3]
2026-01-08 01:22:13.203731 | orchestrator | ok: [testbed-node-4]
2026-01-08 01:22:13.203735 | orchestrator | ok: [testbed-node-5]
2026-01-08 01:22:13.203739 | orchestrator |
2026-01-08 01:22:13.203743 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-01-08 01:22:13.203747 | orchestrator | Thursday 08 January 2026 01:22:11 +0000 (0:00:00.317) 0:00:03.673 ******
2026-01-08 01:22:13.203751 | orchestrator | ok: [testbed-node-3]
2026-01-08 01:22:13.203755 | orchestrator |
2026-01-08 01:22:13.203759 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-08 01:22:13.203763 | orchestrator | Thursday 08 January 2026 01:22:12 +0000 (0:00:00.804) 0:00:04.478 ******
2026-01-08 01:22:13.203767 | orchestrator | ok: [testbed-node-3]
2026-01-08 01:22:13.203771 | orchestrator | ok: [testbed-node-4]
2026-01-08 01:22:13.203775 | orchestrator | ok: [testbed-node-5]
2026-01-08 01:22:13.203778 | orchestrator |
2026-01-08 01:22:13.203782 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-01-08 01:22:13.203786 | orchestrator | Thursday 08 January 2026 01:22:12 +0000 (0:00:00.285) 0:00:04.763 ******
2026-01-08 01:22:13.203792 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c0f49a3fbbbf5c9b7418402f1d7bb1a50a1f6b2e77dde969f14f3dde59656ece', 'image':
'registry.osism.tech/kolla/nova-compute:2025.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2026-01-08 01:22:13.203799 | orchestrator | skipping: [testbed-node-3] => (item={'id': '00d13ea1a100945341a7ab2a1348fa8351b9bf16cedbb5ef90c7c7a72de477e7', 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2026-01-08 01:22:13.203805 | orchestrator | skipping: [testbed-node-3] => (item={'id': '28722025053b6c5bd04aa97fdb49cdea3c75c6dad4d471aa16136e661bef89ed', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2026-01-08 01:22:13.203811 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8dde294f290517a151db473aea1a1adb1209ab1b9c680c0dfc1d27ced58f5089', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})
2026-01-08 01:22:13.203829 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5f3dca4d95e1a31b7996b3f849ad05310854f2c9f97e78f069a88b6be8636221', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})
2026-01-08 01:22:13.203848 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a2f543b9f1e458a3c06b7fdfe09cfbe117bcf788503b8f351237b5c70e4d3c60', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})
2026-01-08 01:22:13.203852 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5208d2c8a8769dad7ccb9145a1ef530696c66013e97e1dd4cf4bfd4d1dcdcd57', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 17 minutes'})
2026-01-08 01:22:13.203858 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b19da055c2136de876e268d9d347278709d8d0b1607453e7abe6bb09855343a3', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})
2026-01-08 01:22:13.203863 | orchestrator | skipping: [testbed-node-3] => (item={'id': '13b87361425371b5218e78ae22366a9944de16ed9f446232ca3d48af3aa6c9be', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})
2026-01-08 01:22:13.203867 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bec157552208112a19fc9aaf271776d9318c905dd19fc2c8e822b7a7ab0560a9', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})
2026-01-08 01:22:13.203871 | orchestrator | ok: [testbed-node-3] => (item={'id': '0be0e15a3ca84aff4de0e3bee6efcdb8e7af574aaa0191ca5d64b7102ee156f2', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 25 minutes'})
2026-01-08 01:22:13.203875 | orchestrator | ok: [testbed-node-3] => (item={'id': 'b464e3d6a702fb2332b3dc8d0189481d88c74757788d43bcc20a3c2dcf80aab6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 25 minutes'})
2026-01-08 01:22:13.203879 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7c836e8a83dfdd57d4d80ad1bc29590eaf825d724d3f06aa2bc39b15e64dc546', 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})
2026-01-08 01:22:13.203890 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f2fa2cb1d0217db49270d307199e2acb1af90272ff98021fd088c4bbe2cd2960', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2026-01-08 01:22:13.203897 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9f6f0be0b644d0918a11d6108df2f4ab64fd311ca6c5dd0e3971c28fe08c4996', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2026-01-08 01:22:13.203902 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8a55cab51f1fa0bd909f648791aae49fb1b29e4c2b79e1cf2e1885a0a0b12d87', 'image': 'registry.osism.tech/kolla/cron:2025.1', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})
2026-01-08 01:22:13.203906 | orchestrator | skipping: [testbed-node-3] => (item={'id': '921b9c5c68682c41041257449711df95f528beb96758b0b44b0e02cd3ade2198', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})
2026-01-08 01:22:13.203911 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ad68ae14fc017952101caf0cbd38c915d68a0d6a7a3ab8cd19b0b3967352a433', 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})
2026-01-08 01:22:13.203919 | orchestrator | skipping: [testbed-node-4] => (item={'id': '03fea2174d5cbb1901d1db2697dcdede809ceef42902bf89fef8dfcf01394c7e', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2026-01-08 01:22:13.203924 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e5f8814bd55ec31ff5cce9a7e0d452c29c1d493af95f9e711f756650e15209ff', 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2026-01-08 01:22:13.203929 | orchestrator | skipping: [testbed-node-4] => (item={'id': '981b0de56e6b899c088e03e1508baafd783eb7def1dbcf3d055e93313515227f', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2026-01-08 01:22:13.203937 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fe08dbe736cf5337705ca24e026fa61363214363375342985f9b42cf7cc237a7', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})
2026-01-08 01:22:13.446698 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e4612007d1d5aaef4f4797f71568121b0aa56b832b4cfaeb3f8ecfb87fb74ae1', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})
2026-01-08 01:22:13.446794 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6bef58cd8eaaa94f1a04265c2a9620d1d7b495c2138c5b4c9c785cabccae6d74', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})
2026-01-08 01:22:13.446805 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7892ed151b437d028dc32511db7d6961d6e3089093f89cf53a5f73e807447dbd', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 17 minutes'})
2026-01-08 01:22:13.446814 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e117274bf6d6b3e7a1b25d6797d425a6749fab210b29d1ee1a973e7ba7819f5f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})
2026-01-08 01:22:13.446822 | orchestrator | skipping: [testbed-node-4] => (item={'id': '97f532aa00f5fc8f19d7d78de1645f5ab5dfa9060c71824103bd7d7625efc182', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})
2026-01-08 01:22:13.446830 | orchestrator | skipping: [testbed-node-4] => (item={'id':
'972e0e257c8b4fa5f7d3c6fbee183ba179486ce2ec70c471d1065ff32860071f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})
2026-01-08 01:22:13.446838 | orchestrator | ok: [testbed-node-4] => (item={'id': '695fc9dc8ff669aff7e4f33871117499fdbb94bf2c9d21ad6c4db2942f407449', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 25 minutes'})
2026-01-08 01:22:13.446847 | orchestrator | ok: [testbed-node-4] => (item={'id': '4116790b044bb128fc86c90454c08ef83ad28324a19a1321c0e7f5c6493a8a54', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 25 minutes'})
2026-01-08 01:22:13.446871 | orchestrator | skipping: [testbed-node-4] => (item={'id': '732f89d7c5e8a3ff233851a9bc9ce3e547a724a2c99d10acdfb8fc008659a11d', 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})
2026-01-08 01:22:13.446880 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b523dc059d6ba4ed467cc99e0f876291408159494e9789c30b4f065b901d1257', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2026-01-08 01:22:13.446908 | orchestrator | skipping: [testbed-node-4] => (item={'id': '463ff76e0640321d0ed82c6c74398bf0d46e7840477ec455a70b2613ae758e5f', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})
2026-01-08 01:22:13.446917 | orchestrator | skipping: [testbed-node-4] => (item={'id': '73114e92742453f1f496263382eb7070a3b066730ce79b8f22efe5e6452485c1', 'image': 'registry.osism.tech/kolla/cron:2025.1', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})
2026-01-08 01:22:13.446925 | orchestrator | skipping: [testbed-node-4] => (item={'id': '078228de4eea0d58e86b3b13429d22b645a66f7098035d113030305b5edcbbb7', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 32 minutes'})
2026-01-08 01:22:13.446932 | orchestrator | skipping: [testbed-node-4] => (item={'id': '85b87cb7763b8926196c52c99e9657eadc085bdcfe16b3e6001bc9d3e55fa02f', 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})
2026-01-08 01:22:13.446940 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b3985836166bfa4fb047d2f85714f2aa9048fbc4ecd8e6d69c70197136fdabb2', 'image': 'registry.osism.tech/kolla/nova-compute:2025.1', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2026-01-08 01:22:13.446964 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a2906fde2dc5852f8c10402ee4bdf4f84de3d3ecc2c558b33be9841169ff5924', 'image': 'registry.osism.tech/kolla/nova-libvirt:2025.1', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2026-01-08 01:22:13.446972 | orchestrator | skipping: [testbed-node-5] => (item={'id': '30ab57daf087bfa21fecea1076a8dd71292a56e62e1b3651b052b90ca5e3efb2', 'image': 'registry.osism.tech/kolla/nova-ssh:2025.1', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2026-01-08 01:22:13.446979 | orchestrator | skipping: [testbed-node-5] => (item={'id': '509e03eb8fb0525028838e8bbab1cd4a8fed07cb1da22751d72a145cbcc88916', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2025.1', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})
2026-01-08 01:22:13.446986 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'eb443e763145ff3cc65a342ceefb530334faedf7135de2a0859f1b1ff6416099', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2025.1', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})
2026-01-08 01:22:13.446994 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5e5249f961048a34e72b8abb874e9cd293cf568ca0494d198ccc85b2893bff9b', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2025.1', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})
2026-01-08 01:22:13.447001 | orchestrator | skipping: [testbed-node-5] => (item={'id': '86a1571393eae8cef70cee2ce0256ea60df76ba106b67f854446619cd04c21d0', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2025.1', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 17 minutes'})
2026-01-08 01:22:13.447008 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2481f2fd4bb3b6bb5238627e9ce40b5b1a56c21946abb91621689710466b6d1f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})
2026-01-08 01:22:13.447015 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e4d71e3dcaa2d170a67850caf3df66bd26bdf4e1b233cd2257bf2f8ff65a5abe', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})
2026-01-08 01:22:13.447027 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dbfba760dd9e780b78760f872ad40b89e303746fbe673dadb52e9eef7c4b9f75', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})
2026-01-08 01:22:13.447041 | orchestrator | ok: [testbed-node-5] => (item={'id': 'ce4e2180221cb89b797cd11bea2611384517ced9560baa276d75b980c7a4bb81', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 25 minutes'})
2026-01-08 01:22:13.447048 | orchestrator | ok: [testbed-node-5] => (item={'id': '8346731266bb1dc7b80cc846c9c16c652854de29bdf6bf202ef474ddcb1421a4', 'image':
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-01-08 01:22:13.447056 | orchestrator | skipping: [testbed-node-5] => (item={'id': '60814b7904a29e92b60e194ce57d9021e85adacbc6f1b14d8e500fae8b72f252', 'image': 'registry.osism.tech/kolla/ovn-controller:2025.1', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})  2026-01-08 01:22:13.447063 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fabaf8658b58c8eb8da863d1a64dc28e6c53556a640a768ad22a5d59f11e7984', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2025.1', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-01-08 01:22:13.447071 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3e83507ad689e05a175242f24dcd859fd897fc0a5e643abc0dd46c6ea00cb281', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2025.1', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-01-08 01:22:13.447078 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f90d2a0c082fd721a66948d7b6c242aaaf28fc4e9e00ec4826dc70e63c8305cd', 'image': 'registry.osism.tech/kolla/cron:2025.1', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2026-01-08 01:22:13.447085 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0952fe1e2cae5f09dfd131fecc3505fe027257755983efd6d3b1b8aa194f9955', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2025.1', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2026-01-08 01:22:13.447098 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3064250a242c61d9549c42f26c343f1801950833449b340e5555d7a508a9f014', 'image': 'registry.osism.tech/kolla/fluentd:2025.1', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2026-01-08 01:22:27.834795 | orchestrator | 2026-01-08 01:22:27.834856 | orchestrator | TASK [Get count of ceph-osd containers on 
host] ******************************** 2026-01-08 01:22:27.834865 | orchestrator | Thursday 08 January 2026 01:22:13 +0000 (0:00:00.507) 0:00:05.270 ****** 2026-01-08 01:22:27.834871 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:22:27.834877 | orchestrator | ok: [testbed-node-4] 2026-01-08 01:22:27.834882 | orchestrator | ok: [testbed-node-5] 2026-01-08 01:22:27.834887 | orchestrator | 2026-01-08 01:22:27.834892 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-01-08 01:22:27.834898 | orchestrator | Thursday 08 January 2026 01:22:13 +0000 (0:00:00.307) 0:00:05.578 ****** 2026-01-08 01:22:27.834903 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:22:27.834909 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:22:27.834914 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:22:27.834919 | orchestrator | 2026-01-08 01:22:27.834924 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-01-08 01:22:27.834929 | orchestrator | Thursday 08 January 2026 01:22:14 +0000 (0:00:00.485) 0:00:06.063 ****** 2026-01-08 01:22:27.834934 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:22:27.834939 | orchestrator | ok: [testbed-node-4] 2026-01-08 01:22:27.834945 | orchestrator | ok: [testbed-node-5] 2026-01-08 01:22:27.834950 | orchestrator | 2026-01-08 01:22:27.834955 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-08 01:22:27.834960 | orchestrator | Thursday 08 January 2026 01:22:14 +0000 (0:00:00.305) 0:00:06.368 ****** 2026-01-08 01:22:27.834965 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:22:27.834983 | orchestrator | ok: [testbed-node-4] 2026-01-08 01:22:27.834989 | orchestrator | ok: [testbed-node-5] 2026-01-08 01:22:27.834994 | orchestrator | 2026-01-08 01:22:27.834999 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-01-08 
01:22:27.835004 | orchestrator | Thursday 08 January 2026 01:22:14 +0000 (0:00:00.305) 0:00:06.674 ****** 2026-01-08 01:22:27.835009 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-01-08 01:22:27.835015 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-01-08 01:22:27.835020 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:22:27.835025 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-01-08 01:22:27.835030 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-01-08 01:22:27.835035 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:22:27.835041 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-01-08 01:22:27.835046 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-01-08 01:22:27.835051 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:22:27.835056 | orchestrator | 2026-01-08 01:22:27.835061 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-01-08 01:22:27.835066 | orchestrator | Thursday 08 January 2026 01:22:15 +0000 (0:00:00.321) 0:00:06.995 ****** 2026-01-08 01:22:27.835071 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:22:27.835076 | orchestrator | ok: [testbed-node-4] 2026-01-08 01:22:27.835081 | orchestrator | ok: [testbed-node-5] 2026-01-08 01:22:27.835086 | orchestrator | 2026-01-08 01:22:27.835092 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-01-08 01:22:27.835097 | orchestrator | Thursday 08 January 2026 01:22:15 +0000 (0:00:00.537) 0:00:07.533 ****** 2026-01-08 01:22:27.835102 | orchestrator | skipping: [testbed-node-3] 
2026-01-08 01:22:27.835107 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:22:27.835112 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:22:27.835117 | orchestrator | 2026-01-08 01:22:27.835122 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-01-08 01:22:27.835127 | orchestrator | Thursday 08 January 2026 01:22:15 +0000 (0:00:00.308) 0:00:07.841 ****** 2026-01-08 01:22:27.835132 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:22:27.835137 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:22:27.835143 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:22:27.835148 | orchestrator | 2026-01-08 01:22:27.835153 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-01-08 01:22:27.835158 | orchestrator | Thursday 08 January 2026 01:22:16 +0000 (0:00:00.342) 0:00:08.184 ****** 2026-01-08 01:22:27.835188 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:22:27.835195 | orchestrator | ok: [testbed-node-4] 2026-01-08 01:22:27.835200 | orchestrator | ok: [testbed-node-5] 2026-01-08 01:22:27.835205 | orchestrator | 2026-01-08 01:22:27.835210 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-08 01:22:27.835215 | orchestrator | Thursday 08 January 2026 01:22:16 +0000 (0:00:00.310) 0:00:08.495 ****** 2026-01-08 01:22:27.835220 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:22:27.835225 | orchestrator | 2026-01-08 01:22:27.835230 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-08 01:22:27.835235 | orchestrator | Thursday 08 January 2026 01:22:17 +0000 (0:00:00.678) 0:00:09.173 ****** 2026-01-08 01:22:27.835240 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:22:27.835245 | orchestrator | 2026-01-08 01:22:27.835250 | orchestrator | TASK [Aggregate test results step three] 
*************************************** 2026-01-08 01:22:27.835256 | orchestrator | Thursday 08 January 2026 01:22:17 +0000 (0:00:00.304) 0:00:09.478 ****** 2026-01-08 01:22:27.835261 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:22:27.835270 | orchestrator | 2026-01-08 01:22:27.835275 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-08 01:22:27.835280 | orchestrator | Thursday 08 January 2026 01:22:17 +0000 (0:00:00.270) 0:00:09.749 ****** 2026-01-08 01:22:27.835285 | orchestrator | 2026-01-08 01:22:27.835290 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-08 01:22:27.835295 | orchestrator | Thursday 08 January 2026 01:22:17 +0000 (0:00:00.067) 0:00:09.816 ****** 2026-01-08 01:22:27.835300 | orchestrator | 2026-01-08 01:22:27.835305 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-08 01:22:27.835319 | orchestrator | Thursday 08 January 2026 01:22:18 +0000 (0:00:00.069) 0:00:09.886 ****** 2026-01-08 01:22:27.835325 | orchestrator | 2026-01-08 01:22:27.835330 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-08 01:22:27.835335 | orchestrator | Thursday 08 January 2026 01:22:18 +0000 (0:00:00.073) 0:00:09.960 ****** 2026-01-08 01:22:27.835340 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:22:27.835345 | orchestrator | 2026-01-08 01:22:27.835350 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-01-08 01:22:27.835355 | orchestrator | Thursday 08 January 2026 01:22:18 +0000 (0:00:00.248) 0:00:10.208 ****** 2026-01-08 01:22:27.835360 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:22:27.835365 | orchestrator | 2026-01-08 01:22:27.835370 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-08 01:22:27.835375 | 
orchestrator | Thursday 08 January 2026 01:22:18 +0000 (0:00:00.245) 0:00:10.453 ****** 2026-01-08 01:22:27.835380 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:22:27.835385 | orchestrator | ok: [testbed-node-4] 2026-01-08 01:22:27.835391 | orchestrator | ok: [testbed-node-5] 2026-01-08 01:22:27.835397 | orchestrator | 2026-01-08 01:22:27.835403 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-01-08 01:22:27.835409 | orchestrator | Thursday 08 January 2026 01:22:18 +0000 (0:00:00.288) 0:00:10.741 ****** 2026-01-08 01:22:27.835415 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:22:27.835421 | orchestrator | 2026-01-08 01:22:27.835427 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-01-08 01:22:27.835433 | orchestrator | Thursday 08 January 2026 01:22:19 +0000 (0:00:00.703) 0:00:11.445 ****** 2026-01-08 01:22:27.835439 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-08 01:22:27.835445 | orchestrator | 2026-01-08 01:22:27.835452 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-01-08 01:22:27.835457 | orchestrator | Thursday 08 January 2026 01:22:21 +0000 (0:00:01.972) 0:00:13.417 ****** 2026-01-08 01:22:27.835463 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:22:27.835469 | orchestrator | 2026-01-08 01:22:27.835475 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-01-08 01:22:27.835481 | orchestrator | Thursday 08 January 2026 01:22:21 +0000 (0:00:00.154) 0:00:13.572 ****** 2026-01-08 01:22:27.835487 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:22:27.835493 | orchestrator | 2026-01-08 01:22:27.835498 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-01-08 01:22:27.835504 | orchestrator | Thursday 08 January 2026 01:22:22 +0000 (0:00:00.317) 
0:00:13.889 ****** 2026-01-08 01:22:27.835510 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:22:27.835516 | orchestrator | 2026-01-08 01:22:27.835523 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-01-08 01:22:27.835529 | orchestrator | Thursday 08 January 2026 01:22:22 +0000 (0:00:00.118) 0:00:14.008 ****** 2026-01-08 01:22:27.835535 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:22:27.835541 | orchestrator | 2026-01-08 01:22:27.835549 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-08 01:22:27.835555 | orchestrator | Thursday 08 January 2026 01:22:22 +0000 (0:00:00.128) 0:00:14.136 ****** 2026-01-08 01:22:27.835561 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:22:27.835567 | orchestrator | ok: [testbed-node-4] 2026-01-08 01:22:27.835576 | orchestrator | ok: [testbed-node-5] 2026-01-08 01:22:27.835582 | orchestrator | 2026-01-08 01:22:27.835588 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-01-08 01:22:27.835594 | orchestrator | Thursday 08 January 2026 01:22:22 +0000 (0:00:00.302) 0:00:14.439 ****** 2026-01-08 01:22:27.835599 | orchestrator | changed: [testbed-node-3] 2026-01-08 01:22:27.835605 | orchestrator | changed: [testbed-node-4] 2026-01-08 01:22:27.835659 | orchestrator | changed: [testbed-node-5] 2026-01-08 01:22:27.835668 | orchestrator | 2026-01-08 01:22:27.835677 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-01-08 01:22:27.835685 | orchestrator | Thursday 08 January 2026 01:22:25 +0000 (0:00:02.916) 0:00:17.356 ****** 2026-01-08 01:22:27.835691 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:22:27.835697 | orchestrator | ok: [testbed-node-4] 2026-01-08 01:22:27.835703 | orchestrator | ok: [testbed-node-5] 2026-01-08 01:22:27.835709 | orchestrator | 2026-01-08 01:22:27.835715 | orchestrator | TASK [Get 
unencrypted and encrypted OSDs] ************************************** 2026-01-08 01:22:27.835721 | orchestrator | Thursday 08 January 2026 01:22:25 +0000 (0:00:00.312) 0:00:17.668 ****** 2026-01-08 01:22:27.835727 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:22:27.835732 | orchestrator | ok: [testbed-node-4] 2026-01-08 01:22:27.835738 | orchestrator | ok: [testbed-node-5] 2026-01-08 01:22:27.835744 | orchestrator | 2026-01-08 01:22:27.835750 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-01-08 01:22:27.835756 | orchestrator | Thursday 08 January 2026 01:22:26 +0000 (0:00:00.524) 0:00:18.193 ****** 2026-01-08 01:22:27.835762 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:22:27.835768 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:22:27.835773 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:22:27.835778 | orchestrator | 2026-01-08 01:22:27.835783 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-01-08 01:22:27.835788 | orchestrator | Thursday 08 January 2026 01:22:26 +0000 (0:00:00.307) 0:00:18.500 ****** 2026-01-08 01:22:27.835793 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:22:27.835798 | orchestrator | ok: [testbed-node-4] 2026-01-08 01:22:27.835803 | orchestrator | ok: [testbed-node-5] 2026-01-08 01:22:27.835808 | orchestrator | 2026-01-08 01:22:27.835813 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-01-08 01:22:27.835818 | orchestrator | Thursday 08 January 2026 01:22:27 +0000 (0:00:00.541) 0:00:19.042 ****** 2026-01-08 01:22:27.835823 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:22:27.835828 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:22:27.835834 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:22:27.835838 | orchestrator | 2026-01-08 01:22:27.835846 | orchestrator | TASK [Pass if count of unencrypted OSDs equals 
count of OSDs] ****************** 2026-01-08 01:22:27.835857 | orchestrator | Thursday 08 January 2026 01:22:27 +0000 (0:00:00.315) 0:00:19.357 ****** 2026-01-08 01:22:27.835869 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:22:27.835877 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:22:27.835885 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:22:27.835893 | orchestrator | 2026-01-08 01:22:27.835906 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-08 01:22:35.794209 | orchestrator | Thursday 08 January 2026 01:22:27 +0000 (0:00:00.307) 0:00:19.665 ****** 2026-01-08 01:22:35.794302 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:22:35.794315 | orchestrator | ok: [testbed-node-4] 2026-01-08 01:22:35.794321 | orchestrator | ok: [testbed-node-5] 2026-01-08 01:22:35.794327 | orchestrator | 2026-01-08 01:22:35.794347 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-01-08 01:22:35.794361 | orchestrator | Thursday 08 January 2026 01:22:28 +0000 (0:00:00.494) 0:00:20.160 ****** 2026-01-08 01:22:35.794368 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:22:35.794374 | orchestrator | ok: [testbed-node-4] 2026-01-08 01:22:35.794380 | orchestrator | ok: [testbed-node-5] 2026-01-08 01:22:35.794386 | orchestrator | 2026-01-08 01:22:35.794415 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-01-08 01:22:35.794422 | orchestrator | Thursday 08 January 2026 01:22:29 +0000 (0:00:00.746) 0:00:20.906 ****** 2026-01-08 01:22:35.794428 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:22:35.794434 | orchestrator | ok: [testbed-node-4] 2026-01-08 01:22:35.794440 | orchestrator | ok: [testbed-node-5] 2026-01-08 01:22:35.794447 | orchestrator | 2026-01-08 01:22:35.794453 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-01-08 
01:22:35.794459 | orchestrator | Thursday 08 January 2026 01:22:29 +0000 (0:00:00.342) 0:00:21.249 ****** 2026-01-08 01:22:35.794466 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:22:35.794474 | orchestrator | skipping: [testbed-node-4] 2026-01-08 01:22:35.794478 | orchestrator | skipping: [testbed-node-5] 2026-01-08 01:22:35.794482 | orchestrator | 2026-01-08 01:22:35.794486 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-01-08 01:22:35.794490 | orchestrator | Thursday 08 January 2026 01:22:29 +0000 (0:00:00.313) 0:00:21.562 ****** 2026-01-08 01:22:35.794494 | orchestrator | ok: [testbed-node-3] 2026-01-08 01:22:35.794497 | orchestrator | ok: [testbed-node-4] 2026-01-08 01:22:35.794501 | orchestrator | ok: [testbed-node-5] 2026-01-08 01:22:35.794505 | orchestrator | 2026-01-08 01:22:35.794509 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-01-08 01:22:35.794513 | orchestrator | Thursday 08 January 2026 01:22:30 +0000 (0:00:00.542) 0:00:22.105 ****** 2026-01-08 01:22:35.794517 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-08 01:22:35.794521 | orchestrator | 2026-01-08 01:22:35.794525 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-01-08 01:22:35.794528 | orchestrator | Thursday 08 January 2026 01:22:30 +0000 (0:00:00.309) 0:00:22.414 ****** 2026-01-08 01:22:35.794532 | orchestrator | skipping: [testbed-node-3] 2026-01-08 01:22:35.794536 | orchestrator | 2026-01-08 01:22:35.794539 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-08 01:22:35.794543 | orchestrator | Thursday 08 January 2026 01:22:30 +0000 (0:00:00.252) 0:00:22.667 ****** 2026-01-08 01:22:35.794558 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-08 01:22:35.794562 | orchestrator | 2026-01-08 01:22:35.794566 | 
orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-08 01:22:35.794569 | orchestrator | Thursday 08 January 2026 01:22:32 +0000 (0:00:01.688) 0:00:24.356 ****** 2026-01-08 01:22:35.794573 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-08 01:22:35.794577 | orchestrator | 2026-01-08 01:22:35.794581 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-01-08 01:22:35.794585 | orchestrator | Thursday 08 January 2026 01:22:32 +0000 (0:00:00.261) 0:00:24.618 ****** 2026-01-08 01:22:35.794588 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-08 01:22:35.794593 | orchestrator | 2026-01-08 01:22:35.794599 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-08 01:22:35.794656 | orchestrator | Thursday 08 January 2026 01:22:33 +0000 (0:00:00.271) 0:00:24.889 ****** 2026-01-08 01:22:35.794663 | orchestrator | 2026-01-08 01:22:35.794669 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-08 01:22:35.794675 | orchestrator | Thursday 08 January 2026 01:22:33 +0000 (0:00:00.074) 0:00:24.963 ****** 2026-01-08 01:22:35.794681 | orchestrator | 2026-01-08 01:22:35.794687 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-08 01:22:35.794692 | orchestrator | Thursday 08 January 2026 01:22:33 +0000 (0:00:00.068) 0:00:25.031 ****** 2026-01-08 01:22:35.794698 | orchestrator | 2026-01-08 01:22:35.794703 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-01-08 01:22:35.794709 | orchestrator | Thursday 08 January 2026 01:22:33 +0000 (0:00:00.075) 0:00:25.107 ****** 2026-01-08 01:22:35.794715 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-08 01:22:35.794721 | orchestrator | 
2026-01-08 01:22:35.794735 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-08 01:22:35.794740 | orchestrator | Thursday 08 January 2026 01:22:34 +0000 (0:00:01.586) 0:00:26.694 ****** 2026-01-08 01:22:35.794746 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-01-08 01:22:35.794752 | orchestrator |  "msg": [ 2026-01-08 01:22:35.794758 | orchestrator |  "Validator run completed.", 2026-01-08 01:22:35.794765 | orchestrator |  "You can find the report file here:", 2026-01-08 01:22:35.794770 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-01-08T01:22:09+00:00-report.json", 2026-01-08 01:22:35.794777 | orchestrator |  "on the following host:", 2026-01-08 01:22:35.794783 | orchestrator |  "testbed-manager" 2026-01-08 01:22:35.794789 | orchestrator |  ] 2026-01-08 01:22:35.794796 | orchestrator | } 2026-01-08 01:22:35.794802 | orchestrator | 2026-01-08 01:22:35.794808 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 01:22:35.794815 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-08 01:22:35.794823 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-08 01:22:35.794846 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-08 01:22:35.794852 | orchestrator | 2026-01-08 01:22:35.794858 | orchestrator | 2026-01-08 01:22:35.794863 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 01:22:35.794869 | orchestrator | Thursday 08 January 2026 01:22:35 +0000 (0:00:00.597) 0:00:27.291 ****** 2026-01-08 01:22:35.794875 | orchestrator | =============================================================================== 2026-01-08 01:22:35.794880 | orchestrator | List ceph LVM volumes 
and collect data ---------------------------------- 2.92s 2026-01-08 01:22:35.794886 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.97s 2026-01-08 01:22:35.794892 | orchestrator | Aggregate test results step one ----------------------------------------- 1.69s 2026-01-08 01:22:35.794898 | orchestrator | Write report file ------------------------------------------------------- 1.59s 2026-01-08 01:22:35.794903 | orchestrator | Get timestamp for report file ------------------------------------------- 0.85s 2026-01-08 01:22:35.794909 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.80s 2026-01-08 01:22:35.794915 | orchestrator | Create report output directory ------------------------------------------ 0.79s 2026-01-08 01:22:35.794920 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.75s 2026-01-08 01:22:35.794926 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.70s 2026-01-08 01:22:35.794932 | orchestrator | Aggregate test results step one ----------------------------------------- 0.68s 2026-01-08 01:22:35.794938 | orchestrator | Print report file information ------------------------------------------- 0.60s 2026-01-08 01:22:35.794943 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.54s 2026-01-08 01:22:35.794949 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.54s 2026-01-08 01:22:35.794955 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.54s 2026-01-08 01:22:35.794961 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.53s 2026-01-08 01:22:35.794966 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.52s 2026-01-08 01:22:35.794972 | orchestrator | Get list of ceph-osd containers on 
host --------------------------------- 0.51s 2026-01-08 01:22:35.794978 | orchestrator | Prepare test data ------------------------------------------------------- 0.49s 2026-01-08 01:22:35.794984 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.49s 2026-01-08 01:22:35.794990 | orchestrator | Calculate sub test expression results ----------------------------------- 0.34s 2026-01-08 01:22:36.117666 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-01-08 01:22:36.126453 | orchestrator | + set -e 2026-01-08 01:22:36.126547 | orchestrator | + source /opt/manager-vars.sh 2026-01-08 01:22:36.126562 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-08 01:22:36.126568 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-08 01:22:36.126575 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-08 01:22:36.126581 | orchestrator | ++ CEPH_VERSION=reef 2026-01-08 01:22:36.126587 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-08 01:22:36.126593 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-08 01:22:36.126600 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-08 01:22:36.126628 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-08 01:22:36.126633 | orchestrator | ++ export OPENSTACK_VERSION=2025.1 2026-01-08 01:22:36.126640 | orchestrator | ++ OPENSTACK_VERSION=2025.1 2026-01-08 01:22:36.126646 | orchestrator | ++ export ARA=false 2026-01-08 01:22:36.126652 | orchestrator | ++ ARA=false 2026-01-08 01:22:36.126658 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-08 01:22:36.126664 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-08 01:22:36.126670 | orchestrator | ++ export TEMPEST=true 2026-01-08 01:22:36.126675 | orchestrator | ++ TEMPEST=true 2026-01-08 01:22:36.126681 | orchestrator | ++ export IS_ZUUL=true 2026-01-08 01:22:36.126687 | orchestrator | ++ IS_ZUUL=true 2026-01-08 01:22:36.126693 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.62 2026-01-08 
01:22:36.126699 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.62 2026-01-08 01:22:36.126712 | orchestrator | ++ export EXTERNAL_API=false 2026-01-08 01:22:36.126717 | orchestrator | ++ EXTERNAL_API=false 2026-01-08 01:22:36.126722 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-08 01:22:36.126727 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-08 01:22:36.126733 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-08 01:22:36.126739 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-08 01:22:36.126745 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-08 01:22:36.126750 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-08 01:22:36.126756 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-01-08 01:22:36.126762 | orchestrator | + source /etc/os-release 2026-01-08 01:22:36.126767 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS' 2026-01-08 01:22:36.126773 | orchestrator | ++ NAME=Ubuntu 2026-01-08 01:22:36.126778 | orchestrator | ++ VERSION_ID=24.04 2026-01-08 01:22:36.126783 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2026-01-08 01:22:36.126789 | orchestrator | ++ VERSION_CODENAME=noble 2026-01-08 01:22:36.126794 | orchestrator | ++ ID=ubuntu 2026-01-08 01:22:36.126799 | orchestrator | ++ ID_LIKE=debian 2026-01-08 01:22:36.126805 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-01-08 01:22:36.126811 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-01-08 01:22:36.126816 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-01-08 01:22:36.126823 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-01-08 01:22:36.126829 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-01-08 01:22:36.127120 | orchestrator | ++ LOGO=ubuntu-logo 2026-01-08 01:22:36.127138 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-01-08 01:22:36.127144 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic 
mysql-client' 2026-01-08 01:22:36.127150 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-01-08 01:22:36.160990 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-01-08 01:23:00.537512 | orchestrator | 2026-01-08 01:23:00.537696 | orchestrator | # Status of Elasticsearch 2026-01-08 01:23:00.537714 | orchestrator | 2026-01-08 01:23:00.537720 | orchestrator | + pushd /opt/configuration/contrib 2026-01-08 01:23:00.537728 | orchestrator | + echo 2026-01-08 01:23:00.537734 | orchestrator | + echo '# Status of Elasticsearch' 2026-01-08 01:23:00.537740 | orchestrator | + echo 2026-01-08 01:23:00.537746 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-01-08 01:23:00.703103 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-01-08 01:23:00.703210 | orchestrator | 2026-01-08 01:23:00.703217 | orchestrator | # Status of MariaDB 2026-01-08 01:23:00.703244 | orchestrator | 2026-01-08 01:23:00.703248 | orchestrator | + echo 2026-01-08 01:23:00.703253 | orchestrator | + echo '# Status of MariaDB' 2026-01-08 01:23:00.703257 | orchestrator | + echo 2026-01-08 01:23:00.704217 | orchestrator | ++ semver latest 10.0.0-0 2026-01-08 01:23:00.756443 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-08 01:23:00.756529 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-08 01:23:00.756540 | orchestrator | + osism status database 2026-01-08 01:23:02.831447 | orchestrator | 2026-01-08 01:23:02 | ERROR  | Unable to get ansible vault password 2026-01-08 01:23:02.831496 | 
orchestrator | 2026-01-08 01:23:02 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-01-08 01:23:02.831502 | orchestrator | 2026-01-08 01:23:02 | ERROR  | Dropping encrypted entries 2026-01-08 01:23:02.863530 | orchestrator | 2026-01-08 01:23:02 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0... 2026-01-08 01:23:02.871768 | orchestrator | 2026-01-08 01:23:02 | INFO  | Cluster Status: Primary 2026-01-08 01:23:02.871862 | orchestrator | 2026-01-08 01:23:02 | INFO  | Connected: ON 2026-01-08 01:23:02.871871 | orchestrator | 2026-01-08 01:23:02 | INFO  | Ready: ON 2026-01-08 01:23:02.871877 | orchestrator | 2026-01-08 01:23:02 | INFO  | Cluster Size: 3 2026-01-08 01:23:02.871883 | orchestrator | 2026-01-08 01:23:02 | INFO  | Local State: Synced 2026-01-08 01:23:02.871889 | orchestrator | 2026-01-08 01:23:02 | INFO  | Cluster State UUID: ff845e61-ec2c-11f0-a51a-aeab7d4ac120 2026-01-08 01:23:02.871896 | orchestrator | 2026-01-08 01:23:02 | INFO  | Cluster Members: 192.168.16.11:3306,192.168.16.12:3306,192.168.16.10:3306 2026-01-08 01:23:02.871903 | orchestrator | 2026-01-08 01:23:02 | INFO  | Galera Version: 26.4.24(ra6b53429) 2026-01-08 01:23:02.871909 | orchestrator | 2026-01-08 01:23:02 | INFO  | Local Node UUID: 322b201d-ec2d-11f0-8f10-2b1b8666dac9 2026-01-08 01:23:02.871971 | orchestrator | 2026-01-08 01:23:02 | INFO  | Flow Control Paused: 0.00% 2026-01-08 01:23:02.871985 | orchestrator | 2026-01-08 01:23:02 | INFO  | Recv Queue Avg: 0 2026-01-08 01:23:02.871995 | orchestrator | 2026-01-08 01:23:02 | INFO  | Send Queue Avg: 0.00072789 2026-01-08 01:23:02.872002 | orchestrator | 2026-01-08 01:23:02 | INFO  | Transactions: 5443 local commits, 8174 replicated, 148 received 2026-01-08 01:23:02.872006 | orchestrator | 2026-01-08 01:23:02 | INFO  | Conflicts: 0 cert failures, 0 bf aborts 2026-01-08 01:23:02.872185 | orchestrator | 2026-01-08 01:23:02 | INFO  | MariaDB Uptime: 24 minutes, 
12 seconds 2026-01-08 01:23:02.872245 | orchestrator | 2026-01-08 01:23:02 | INFO  | Threads: 150 connected, 1 running 2026-01-08 01:23:02.872252 | orchestrator | 2026-01-08 01:23:02 | INFO  | Queries: 158586 total, 0 slow 2026-01-08 01:23:02.872410 | orchestrator | 2026-01-08 01:23:02 | INFO  | Aborted Connects: 47 2026-01-08 01:23:02.872569 | orchestrator | 2026-01-08 01:23:02 | INFO  | MariaDB Galera Cluster validation PASSED 2026-01-08 01:23:03.211425 | orchestrator | 2026-01-08 01:23:03.211491 | orchestrator | # Status of Prometheus 2026-01-08 01:23:03.211504 | orchestrator | 2026-01-08 01:23:03.211512 | orchestrator | + echo 2026-01-08 01:23:03.211522 | orchestrator | + echo '# Status of Prometheus' 2026-01-08 01:23:03.211531 | orchestrator | + echo 2026-01-08 01:23:03.211540 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-01-08 01:23:03.273207 | orchestrator | Unauthorized 2026-01-08 01:23:03.276702 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-01-08 01:23:03.324927 | orchestrator | Unauthorized 2026-01-08 01:23:03.328170 | orchestrator | 2026-01-08 01:23:03.328218 | orchestrator | # Status of RabbitMQ 2026-01-08 01:23:03.328223 | orchestrator | 2026-01-08 01:23:03.328228 | orchestrator | + echo 2026-01-08 01:23:03.328232 | orchestrator | + echo '# Status of RabbitMQ' 2026-01-08 01:23:03.328236 | orchestrator | + echo 2026-01-08 01:23:03.329735 | orchestrator | ++ semver latest 10.0.0-0 2026-01-08 01:23:03.399514 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-08 01:23:03.399563 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-01-08 01:23:03.399569 | orchestrator | + osism status messaging 2026-01-08 01:23:25.603879 | orchestrator | 2026-01-08 01:23:25 | ERROR  | Unable to get ansible vault password 2026-01-08 01:23:25.603986 | orchestrator | 2026-01-08 01:23:25 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-01-08 
01:23:25.603998 | orchestrator | 2026-01-08 01:23:25 | ERROR  | Dropping encrypted entries 2026-01-08 01:23:25.638804 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack... 2026-01-08 01:23:25.696206 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-0] RabbitMQ Version: 4.1.7 2026-01-08 01:23:25.696378 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-0] Erlang Version: 27.3.4.1 2026-01-08 01:23:25.696391 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0 2026-01-08 01:23:25.696408 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-0] Cluster Size: 3 2026-01-08 01:23:25.696417 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-08 01:23:25.696424 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-08 01:23:25.696727 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-0] Partitions: None (healthy) 2026-01-08 01:23:25.696888 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-0] Connections: 212, Channels: 211, Queues: 173 2026-01-08 01:23:25.697135 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-0] Messages: 221 total, 221 ready, 0 unacked 2026-01-08 01:23:25.697155 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-0] Message Rates: 6.4/s publish, 5.8/s deliver 2026-01-08 01:23:25.697437 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-0] Disk Free: 58.9 GB (limit: 0.0 GB) 2026-01-08 01:23:25.697750 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-0] Memory Used: 0.15 GB (limit: 18.81 GB) 2026-01-08 01:23:25.697903 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-0] File Descriptors: 104/1024 2026-01-08 01:23:25.698141 | 
orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-0] Sockets: 0/0 2026-01-08 01:23:25.698764 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack... 2026-01-08 01:23:25.762297 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-1] RabbitMQ Version: 4.1.7 2026-01-08 01:23:25.762408 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-1] Erlang Version: 27.3.4.1 2026-01-08 01:23:25.762418 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1 2026-01-08 01:23:25.762423 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-1] Cluster Size: 3 2026-01-08 01:23:25.762435 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-08 01:23:25.763014 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-08 01:23:25.763074 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-1] Partitions: None (healthy) 2026-01-08 01:23:25.763301 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-1] Connections: 212, Channels: 211, Queues: 173 2026-01-08 01:23:25.763621 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-1] Messages: 221 total, 221 ready, 0 unacked 2026-01-08 01:23:25.763904 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-1] Message Rates: 6.4/s publish, 5.8/s deliver 2026-01-08 01:23:25.763983 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-1] Disk Free: 58.9 GB (limit: 0.0 GB) 2026-01-08 01:23:25.764187 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-1] Memory Used: 0.15 GB (limit: 18.81 GB) 2026-01-08 01:23:25.764612 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-1] File Descriptors: 117/1024 2026-01-08 01:23:25.764862 | orchestrator | 2026-01-08 
01:23:25 | INFO  | [testbed-node-1] Sockets: 0/0 2026-01-08 01:23:25.765045 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack... 2026-01-08 01:23:25.838448 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-2] RabbitMQ Version: 4.1.7 2026-01-08 01:23:25.838623 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-2] Erlang Version: 27.3.4.1 2026-01-08 01:23:25.838644 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2 2026-01-08 01:23:25.838653 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-2] Cluster Size: 3 2026-01-08 01:23:25.838660 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-08 01:23:25.838676 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-01-08 01:23:25.838681 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-2] Partitions: None (healthy) 2026-01-08 01:23:25.838741 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-2] Connections: 212, Channels: 211, Queues: 173 2026-01-08 01:23:25.839088 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-2] Messages: 221 total, 221 ready, 0 unacked 2026-01-08 01:23:25.839288 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-2] Message Rates: 6.4/s publish, 5.8/s deliver 2026-01-08 01:23:25.839563 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-2] Disk Free: 58.9 GB (limit: 0.0 GB) 2026-01-08 01:23:25.839610 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-2] Memory Used: 0.15 GB (limit: 18.81 GB) 2026-01-08 01:23:25.840038 | orchestrator | 2026-01-08 01:23:25 | INFO  | [testbed-node-2] File Descriptors: 111/1024 2026-01-08 01:23:25.840067 | orchestrator | 2026-01-08 01:23:25 | INFO  | 
[testbed-node-2] Sockets: 0/0 2026-01-08 01:23:25.840075 | orchestrator | 2026-01-08 01:23:25 | INFO  | RabbitMQ Cluster validation PASSED 2026-01-08 01:23:26.034699 | orchestrator | 2026-01-08 01:23:26.034792 | orchestrator | # Status of Redis 2026-01-08 01:23:26.034805 | orchestrator | 2026-01-08 01:23:26.034813 | orchestrator | + echo 2026-01-08 01:23:26.034821 | orchestrator | + echo '# Status of Redis' 2026-01-08 01:23:26.034828 | orchestrator | + echo 2026-01-08 01:23:26.034837 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-01-08 01:23:26.041392 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001606s;;;0.000000;10.000000 2026-01-08 01:23:26.041485 | orchestrator | 2026-01-08 01:23:26.041497 | orchestrator | # Create backup of MariaDB database 2026-01-08 01:23:26.041506 | orchestrator | 2026-01-08 01:23:26.041513 | orchestrator | + popd 2026-01-08 01:23:26.041519 | orchestrator | + echo 2026-01-08 01:23:26.041525 | orchestrator | + echo '# Create backup of MariaDB database' 2026-01-08 01:23:26.041530 | orchestrator | + echo 2026-01-08 01:23:26.041537 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-01-08 01:23:27.867755 | orchestrator | 2026-01-08 01:23:27 | INFO  | Task 4664633e-3017-45ca-ace3-a439c95be7a0 (mariadb_backup) was prepared for execution. 2026-01-08 01:23:27.867811 | orchestrator | 2026-01-08 01:23:27 | INFO  | It takes a moment until task 4664633e-3017-45ca-ace3-a439c95be7a0 (mariadb_backup) has been started and output is visible here. 
2026-01-08 01:23:55.290079 | orchestrator | 2026-01-08 01:23:55.290134 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-08 01:23:55.290140 | orchestrator | 2026-01-08 01:23:55.290144 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-08 01:23:55.290149 | orchestrator | Thursday 08 January 2026 01:23:32 +0000 (0:00:00.203) 0:00:00.203 ****** 2026-01-08 01:23:55.290153 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:23:55.290159 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:23:55.290165 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:23:55.290173 | orchestrator | 2026-01-08 01:23:55.290182 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-08 01:23:55.290190 | orchestrator | Thursday 08 January 2026 01:23:32 +0000 (0:00:00.322) 0:00:00.525 ****** 2026-01-08 01:23:55.290197 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-08 01:23:55.290204 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-01-08 01:23:55.290219 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-08 01:23:55.290226 | orchestrator | 2026-01-08 01:23:55.290232 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-08 01:23:55.290238 | orchestrator | 2026-01-08 01:23:55.290244 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-08 01:23:55.290251 | orchestrator | Thursday 08 January 2026 01:23:32 +0000 (0:00:00.597) 0:00:01.123 ****** 2026-01-08 01:23:55.290256 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-08 01:23:55.290263 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-08 01:23:55.290269 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-08 01:23:55.290275 | orchestrator | 
2026-01-08 01:23:55.290280 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-08 01:23:55.290286 | orchestrator | Thursday 08 January 2026 01:23:33 +0000 (0:00:00.418) 0:00:01.541 ****** 2026-01-08 01:23:55.290292 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-08 01:23:55.290299 | orchestrator | 2026-01-08 01:23:55.290306 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-01-08 01:23:55.290311 | orchestrator | Thursday 08 January 2026 01:23:33 +0000 (0:00:00.543) 0:00:02.084 ****** 2026-01-08 01:23:55.290317 | orchestrator | ok: [testbed-node-0] 2026-01-08 01:23:55.290323 | orchestrator | ok: [testbed-node-1] 2026-01-08 01:23:55.290329 | orchestrator | ok: [testbed-node-2] 2026-01-08 01:23:55.290335 | orchestrator | 2026-01-08 01:23:55.290341 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-01-08 01:23:55.290347 | orchestrator | Thursday 08 January 2026 01:23:37 +0000 (0:00:03.556) 0:00:05.640 ****** 2026-01-08 01:23:55.290352 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-08 01:23:55.290358 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-01-08 01:23:55.290364 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-08 01:23:55.290371 | orchestrator | mariadb_bootstrap_restart 2026-01-08 01:23:55.290377 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:23:55.290383 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:23:55.290389 | orchestrator | changed: [testbed-node-0] 2026-01-08 01:23:55.290395 | orchestrator | 2026-01-08 01:23:55.290400 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-08 01:23:55.290406 | orchestrator | 
skipping: no hosts matched 2026-01-08 01:23:55.290412 | orchestrator | 2026-01-08 01:23:55.290432 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-08 01:23:55.290438 | orchestrator | skipping: no hosts matched 2026-01-08 01:23:55.290444 | orchestrator | 2026-01-08 01:23:55.290449 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-08 01:23:55.290455 | orchestrator | skipping: no hosts matched 2026-01-08 01:23:55.290462 | orchestrator | 2026-01-08 01:23:55.290476 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-08 01:23:55.290487 | orchestrator | 2026-01-08 01:23:55.290494 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-08 01:23:55.290500 | orchestrator | Thursday 08 January 2026 01:23:54 +0000 (0:00:16.626) 0:00:22.267 ****** 2026-01-08 01:23:55.290505 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:23:55.290512 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:23:55.290518 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:23:55.290523 | orchestrator | 2026-01-08 01:23:55.290529 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-08 01:23:55.290535 | orchestrator | Thursday 08 January 2026 01:23:54 +0000 (0:00:00.318) 0:00:22.586 ****** 2026-01-08 01:23:55.290568 | orchestrator | skipping: [testbed-node-0] 2026-01-08 01:23:55.290574 | orchestrator | skipping: [testbed-node-1] 2026-01-08 01:23:55.290580 | orchestrator | skipping: [testbed-node-2] 2026-01-08 01:23:55.290586 | orchestrator | 2026-01-08 01:23:55.290591 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 01:23:55.290598 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-08 
01:23:55.290605 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-08 01:23:55.290611 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-08 01:23:55.290617 | orchestrator | 2026-01-08 01:23:55.290623 | orchestrator | 2026-01-08 01:23:55.290629 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 01:23:55.290634 | orchestrator | Thursday 08 January 2026 01:23:54 +0000 (0:00:00.456) 0:00:23.042 ****** 2026-01-08 01:23:55.290640 | orchestrator | =============================================================================== 2026-01-08 01:23:55.290646 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 16.63s 2026-01-08 01:23:55.290664 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.56s 2026-01-08 01:23:55.290670 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s 2026-01-08 01:23:55.290676 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.54s 2026-01-08 01:23:55.290681 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.46s 2026-01-08 01:23:55.290687 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.42s 2026-01-08 01:23:55.290693 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2026-01-08 01:23:55.290698 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.32s 2026-01-08 01:23:55.636189 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-01-08 01:23:55.645008 | orchestrator | + set -e 2026-01-08 01:23:55.645054 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-08 01:23:55.645069 | orchestrator | ++ export 
INTERACTIVE=false 2026-01-08 01:23:55.645075 | orchestrator | ++ INTERACTIVE=false 2026-01-08 01:23:55.645079 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-08 01:23:55.645083 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-08 01:23:55.645092 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-01-08 01:23:55.646649 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-01-08 01:23:55.653483 | orchestrator | 2026-01-08 01:23:55.653524 | orchestrator | # OpenStack endpoints 2026-01-08 01:23:55.653584 | orchestrator | 2026-01-08 01:23:55.653589 | orchestrator | ++ export MANAGER_VERSION=latest 2026-01-08 01:23:55.653594 | orchestrator | ++ MANAGER_VERSION=latest 2026-01-08 01:23:55.653598 | orchestrator | + export OS_CLOUD=admin 2026-01-08 01:23:55.653602 | orchestrator | + OS_CLOUD=admin 2026-01-08 01:23:55.653606 | orchestrator | + echo 2026-01-08 01:23:55.653609 | orchestrator | + echo '# OpenStack endpoints' 2026-01-08 01:23:55.653613 | orchestrator | + echo 2026-01-08 01:23:55.653617 | orchestrator | + openstack endpoint list 2026-01-08 01:23:58.986655 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-08 01:23:58.986811 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-01-08 01:23:58.986824 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-08 01:23:58.986830 | orchestrator | | 098fe4233bab41fea81801b498cd9af3 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-01-08 01:23:58.986837 | orchestrator | | 1607a0547ee74877884cb72566864ff8 | RegionOne | barbican | key-manager | True | 
public | https://api.testbed.osism.xyz:9311 | 2026-01-08 01:23:58.986844 | orchestrator | | 1dac32279c664bf289d54900dfe33f5c | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-01-08 01:23:58.986850 | orchestrator | | 30dacef98f6d47ebabb654a8ff45b7f3 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-01-08 01:23:58.986857 | orchestrator | | 33dd6b7a44e748f1a7207b37d3757ca0 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-01-08 01:23:58.986863 | orchestrator | | 3e2fcd09f74447898a831c7c295fefaf | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-01-08 01:23:58.986869 | orchestrator | | 40da772506144618bdf74c632c3ba009 | RegionOne | cinder | block-storage | True | public | https://api.testbed.osism.xyz:8776/v3 | 2026-01-08 01:23:58.986874 | orchestrator | | 5fbed4cd21ed437e941486040cc49fca | RegionOne | cinder | block-storage | True | internal | https://api-int.testbed.osism.xyz:8776/v3 | 2026-01-08 01:23:58.986881 | orchestrator | | 63548741352e47c1820486e43de76654 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-01-08 01:23:58.986887 | orchestrator | | 63d3423c39f94e5a95cf07d4a2c9b2ea | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-01-08 01:23:58.986894 | orchestrator | | 6619789cb2a245d5b2d520b0a0865a71 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-01-08 01:23:58.986900 | orchestrator | | 6ce70f5fd518478da27ba98d6f41f98e | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-01-08 01:23:58.986906 | orchestrator | | 7afffed4f8af40cc9126b9c0ac14cfa9 | RegionOne | magnum | container-infra | True | internal | 
https://api-int.testbed.osism.xyz:9511/v1 | 2026-01-08 01:23:58.986913 | orchestrator | | 84c4e23ed3fc43e3a9af4e6e2eeee42c | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-01-08 01:23:58.986919 | orchestrator | | 8726f16820f140ff95a5c6ff60e59e7d | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-01-08 01:23:58.986947 | orchestrator | | 8b80aea5c4544e289b967034528150c5 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-01-08 01:23:58.986953 | orchestrator | | 90d638ffb58e45d1a147f08699091e63 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-01-08 01:23:58.986960 | orchestrator | | 93f52ebf7cf544b9bb9fc0bb7a556b2a | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-01-08 01:23:58.986966 | orchestrator | | a810f428341b4551a8758cd7c77c49c3 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-01-08 01:23:58.986972 | orchestrator | | a8d4b3fda3c7446399aeaa88b055a03c | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-01-08 01:23:58.987011 | orchestrator | | c4ec0f6f8d7e4fc4b2b495081bff9ae7 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-01-08 01:23:58.987017 | orchestrator | | ca42fca37ea5412d9e91f12b2befcf48 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-01-08 01:23:58.987023 | orchestrator | | de83f6a518864bb3b40de3d15b7dd4d1 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-01-08 01:23:58.987029 | orchestrator | | fe6c658f4d0f450fb665b8558ac0894e | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-01-08 01:23:58.987035 | 
orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-08 01:23:59.233390 | orchestrator | 2026-01-08 01:23:59.233480 | orchestrator | # Cinder 2026-01-08 01:23:59.233490 | orchestrator | 2026-01-08 01:23:59.233497 | orchestrator | + echo 2026-01-08 01:23:59.233503 | orchestrator | + echo '# Cinder' 2026-01-08 01:23:59.233510 | orchestrator | + echo 2026-01-08 01:23:59.233517 | orchestrator | + openstack volume service list 2026-01-08 01:24:03.030630 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-08 01:24:03.030751 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-01-08 01:24:03.030770 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-08 01:24:03.030777 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-01-08T01:24:00.000000 | 2026-01-08 01:24:03.030784 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-01-08T01:24:00.000000 | 2026-01-08 01:24:03.030791 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-01-08T01:24:00.000000 | 2026-01-08 01:24:03.030798 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-01-08T01:23:59.000000 | 2026-01-08 01:24:03.030804 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-01-08T01:23:53.000000 | 2026-01-08 01:24:03.030811 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-01-08T01:23:54.000000 | 2026-01-08 01:24:03.030818 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-01-08T01:23:57.000000 | 2026-01-08 01:24:03.030824 | orchestrator | | cinder-backup 
| testbed-node-2 | nova | enabled | up | 2026-01-08T01:23:58.000000 | 2026-01-08 01:24:03.030831 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-01-08T01:23:59.000000 | 2026-01-08 01:24:03.030838 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-08 01:24:03.309036 | orchestrator | 2026-01-08 01:24:03.309115 | orchestrator | # Neutron 2026-01-08 01:24:03.309125 | orchestrator | 2026-01-08 01:24:03.309132 | orchestrator | + echo 2026-01-08 01:24:03.309139 | orchestrator | + echo '# Neutron' 2026-01-08 01:24:03.309147 | orchestrator | + echo 2026-01-08 01:24:03.309152 | orchestrator | + openstack network agent list 2026-01-08 01:24:06.087940 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-01-08 01:24:06.088033 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-01-08 01:24:06.088044 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-01-08 01:24:06.088050 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-01-08 01:24:06.088057 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-01-08 01:24:06.088081 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-01-08 01:24:06.088087 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-01-08 01:24:06.088093 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-01-08 01:24:06.088115 | orchestrator | | testbed-node-0 | 
OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-01-08 01:24:06.088123 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-01-08 01:24:06.088130 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-01-08 01:24:06.088136 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-01-08 01:24:06.088143 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-01-08 01:24:06.345932 | orchestrator | + openstack network service provider list 2026-01-08 01:24:08.909582 | orchestrator | +---------------+------+---------+ 2026-01-08 01:24:08.909710 | orchestrator | | Service Type | Name | Default | 2026-01-08 01:24:08.909724 | orchestrator | +---------------+------+---------+ 2026-01-08 01:24:08.909730 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-01-08 01:24:08.909736 | orchestrator | +---------------+------+---------+ 2026-01-08 01:24:09.180284 | orchestrator | 2026-01-08 01:24:09.180362 | orchestrator | # Nova 2026-01-08 01:24:09.180368 | orchestrator | 2026-01-08 01:24:09.180373 | orchestrator | + echo 2026-01-08 01:24:09.180377 | orchestrator | + echo '# Nova' 2026-01-08 01:24:09.180382 | orchestrator | + echo 2026-01-08 01:24:09.180386 | orchestrator | + openstack compute service list 2026-01-08 01:24:11.963800 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-01-08 01:24:11.963874 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-01-08 01:24:11.963882 | orchestrator | 
+--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-01-08 01:24:11.963887 | orchestrator | | 61094a57-2e70-4f22-8bcb-a24d4199e202 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-01-08T01:24:09.000000 | 2026-01-08 01:24:11.963891 | orchestrator | | d0213a17-1cf7-4d03-bbc5-4bedb7edec1c | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-01-08T01:24:10.000000 | 2026-01-08 01:24:11.963914 | orchestrator | | 5531b950-4845-4be4-bb7d-e0a020bbbab3 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-01-08T01:24:10.000000 | 2026-01-08 01:24:11.963919 | orchestrator | | 724c7c11-1448-4d73-b001-69e78a6c69a6 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-01-08T01:24:10.000000 | 2026-01-08 01:24:11.963923 | orchestrator | | 9840d94b-05db-4fc3-977d-87ddc028f47c | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-01-08T01:24:02.000000 | 2026-01-08 01:24:11.963927 | orchestrator | | cfdb048d-a34a-41a6-b18c-c54531c881f5 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-01-08T01:24:02.000000 | 2026-01-08 01:24:11.963930 | orchestrator | | f650ef53-a66d-42ca-a903-0196e8af2869 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-01-08T01:24:03.000000 | 2026-01-08 01:24:11.963934 | orchestrator | | 45ae2e20-02f4-4e9f-940f-2e4ba3638475 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-01-08T01:24:03.000000 | 2026-01-08 01:24:11.963938 | orchestrator | | c2676281-8064-4bbb-96e8-0b5462f1f930 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-01-08T01:24:04.000000 | 2026-01-08 01:24:11.963942 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-01-08 01:24:12.251831 | orchestrator | + openstack hypervisor list 2026-01-08 01:24:15.447079 | orchestrator | 
+--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-01-08 01:24:15.447158 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-01-08 01:24:15.447164 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-01-08 01:24:15.447169 | orchestrator | | 772cffcb-d2b0-4da0-95fb-fc3635287a4a | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-01-08 01:24:15.447173 | orchestrator | | ce6d9c04-ae87-4eda-897f-c695759169c4 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-01-08 01:24:15.447177 | orchestrator | | 1fce6e1d-5c68-4a8d-b9c1-89fd735dac2f | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-01-08 01:24:15.447181 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-01-08 01:24:15.725318 | orchestrator | 2026-01-08 01:24:15.725415 | orchestrator | # Run OpenStack test play 2026-01-08 01:24:15.725428 | orchestrator | + echo 2026-01-08 01:24:15.725435 | orchestrator | + echo '# Run OpenStack test play' 2026-01-08 01:24:15.725453 | orchestrator | 2026-01-08 01:24:15.725459 | orchestrator | + echo 2026-01-08 01:24:15.725465 | orchestrator | + osism apply --environment openstack test 2026-01-08 01:24:17.769516 | orchestrator | 2026-01-08 01:24:17 | INFO  | Trying to run play test in environment openstack 2026-01-08 01:24:27.860215 | orchestrator | 2026-01-08 01:24:27 | INFO  | Task c877e5ff-edc9-4045-ba11-0e8cf8c57ca1 (test) was prepared for execution. 2026-01-08 01:24:27.860268 | orchestrator | 2026-01-08 01:24:27 | INFO  | It takes a moment until task c877e5ff-edc9-4045-ba11-0e8cf8c57ca1 (test) has been started and output is visible here. 
2026-01-08 01:31:29.282222 | orchestrator | 2026-01-08 01:31:29.282722 | orchestrator | PLAY [Create test project] ***************************************************** 2026-01-08 01:31:29.282743 | orchestrator | 2026-01-08 01:31:29.282760 | orchestrator | TASK [Create test domain] ****************************************************** 2026-01-08 01:31:29.282767 | orchestrator | Thursday 08 January 2026 01:24:32 +0000 (0:00:00.072) 0:00:00.072 ****** 2026-01-08 01:31:29.282773 | orchestrator | changed: [localhost] 2026-01-08 01:31:29.282780 | orchestrator | 2026-01-08 01:31:29.282787 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-01-08 01:31:29.282794 | orchestrator | Thursday 08 January 2026 01:24:35 +0000 (0:00:03.719) 0:00:03.792 ****** 2026-01-08 01:31:29.282801 | orchestrator | changed: [localhost] 2026-01-08 01:31:29.282807 | orchestrator | 2026-01-08 01:31:29.282814 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-01-08 01:31:29.282821 | orchestrator | Thursday 08 January 2026 01:24:40 +0000 (0:00:04.163) 0:00:07.955 ****** 2026-01-08 01:31:29.282841 | orchestrator | changed: [localhost] 2026-01-08 01:31:29.282847 | orchestrator | 2026-01-08 01:31:29.282853 | orchestrator | TASK [Create test project] ***************************************************** 2026-01-08 01:31:29.282859 | orchestrator | Thursday 08 January 2026 01:24:46 +0000 (0:00:06.698) 0:00:14.654 ****** 2026-01-08 01:31:29.282898 | orchestrator | changed: [localhost] 2026-01-08 01:31:29.282910 | orchestrator | 2026-01-08 01:31:29.282916 | orchestrator | TASK [Create test user] ******************************************************** 2026-01-08 01:31:29.282922 | orchestrator | Thursday 08 January 2026 01:24:50 +0000 (0:00:04.173) 0:00:18.828 ****** 2026-01-08 01:31:29.282929 | orchestrator | changed: [localhost] 2026-01-08 01:31:29.282935 | orchestrator | 2026-01-08 01:31:29.282941 | 
orchestrator | TASK [Add member roles to user test] ******************************************* 2026-01-08 01:31:29.282947 | orchestrator | Thursday 08 January 2026 01:24:55 +0000 (0:00:04.152) 0:00:22.980 ****** 2026-01-08 01:31:29.282953 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-01-08 01:31:29.282958 | orchestrator | changed: [localhost] => (item=member) 2026-01-08 01:31:29.282964 | orchestrator | changed: [localhost] => (item=creator) 2026-01-08 01:31:29.282970 | orchestrator | 2026-01-08 01:31:29.282977 | orchestrator | TASK [Create test server group] ************************************************ 2026-01-08 01:31:29.282984 | orchestrator | Thursday 08 January 2026 01:25:06 +0000 (0:00:11.526) 0:00:34.507 ****** 2026-01-08 01:31:29.282990 | orchestrator | changed: [localhost] 2026-01-08 01:31:29.282997 | orchestrator | 2026-01-08 01:31:29.283001 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-01-08 01:31:29.283006 | orchestrator | Thursday 08 January 2026 01:25:11 +0000 (0:00:04.378) 0:00:38.885 ****** 2026-01-08 01:31:29.283010 | orchestrator | changed: [localhost] 2026-01-08 01:31:29.283015 | orchestrator | 2026-01-08 01:31:29.283019 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-01-08 01:31:29.283024 | orchestrator | Thursday 08 January 2026 01:25:16 +0000 (0:00:05.116) 0:00:44.002 ****** 2026-01-08 01:31:29.283028 | orchestrator | changed: [localhost] 2026-01-08 01:31:29.283032 | orchestrator | 2026-01-08 01:31:29.283037 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-01-08 01:31:29.283041 | orchestrator | Thursday 08 January 2026 01:25:20 +0000 (0:00:04.115) 0:00:48.118 ****** 2026-01-08 01:31:29.283046 | orchestrator | changed: [localhost] 2026-01-08 01:31:29.283050 | orchestrator | 2026-01-08 01:31:29.283055 | orchestrator | TASK [Add rule to icmp security 
group] ***************************************** 2026-01-08 01:31:29.283059 | orchestrator | Thursday 08 January 2026 01:25:24 +0000 (0:00:03.950) 0:00:52.069 ****** 2026-01-08 01:31:29.283063 | orchestrator | changed: [localhost] 2026-01-08 01:31:29.283068 | orchestrator | 2026-01-08 01:31:29.283097 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-01-08 01:31:29.283102 | orchestrator | Thursday 08 January 2026 01:25:27 +0000 (0:00:03.762) 0:00:55.831 ****** 2026-01-08 01:31:29.283106 | orchestrator | changed: [localhost] 2026-01-08 01:31:29.283110 | orchestrator | 2026-01-08 01:31:29.283114 | orchestrator | TASK [Create test network] ***************************************************** 2026-01-08 01:31:29.283119 | orchestrator | Thursday 08 January 2026 01:25:31 +0000 (0:00:03.845) 0:00:59.676 ****** 2026-01-08 01:31:29.283123 | orchestrator | changed: [localhost] 2026-01-08 01:31:29.283128 | orchestrator | 2026-01-08 01:31:29.283132 | orchestrator | TASK [Create test subnet] ****************************************************** 2026-01-08 01:31:29.283136 | orchestrator | Thursday 08 January 2026 01:25:37 +0000 (0:00:05.372) 0:01:05.049 ****** 2026-01-08 01:31:29.283141 | orchestrator | changed: [localhost] 2026-01-08 01:31:29.283145 | orchestrator | 2026-01-08 01:31:29.283149 | orchestrator | TASK [Create test router] ****************************************************** 2026-01-08 01:31:29.283154 | orchestrator | Thursday 08 January 2026 01:25:41 +0000 (0:00:04.665) 0:01:09.714 ****** 2026-01-08 01:31:29.283158 | orchestrator | changed: [localhost] 2026-01-08 01:31:29.283162 | orchestrator | 2026-01-08 01:31:29.283166 | orchestrator | TASK [Create test instances] *************************************************** 2026-01-08 01:31:29.283176 | orchestrator | Thursday 08 January 2026 01:25:53 +0000 (0:00:11.241) 0:01:20.956 ****** 2026-01-08 01:31:29.283180 | orchestrator | changed: [localhost] => 
(item=test) 2026-01-08 01:31:29.283185 | orchestrator | changed: [localhost] => (item=test-1) 2026-01-08 01:31:29.283189 | orchestrator | 2026-01-08 01:31:29.283193 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2026-01-08 01:31:29.283198 | orchestrator | 2026-01-08 01:31:29.283202 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2026-01-08 01:31:29.283206 | orchestrator | changed: [localhost] => (item=test-2) 2026-01-08 01:31:29.283210 | orchestrator | 2026-01-08 01:31:29.283215 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2026-01-08 01:31:29.283219 | orchestrator | changed: [localhost] => (item=test-3) 2026-01-08 01:31:29.283224 | orchestrator | 2026-01-08 01:31:29.283228 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2026-01-08 01:31:29.283233 | orchestrator | 2026-01-08 01:31:29.283237 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2026-01-08 01:31:29.283241 | orchestrator | changed: [localhost] => (item=test-4) 2026-01-08 01:31:29.283246 | orchestrator | 2026-01-08 01:31:29.283250 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-01-08 01:31:29.283268 | orchestrator | Thursday 08 January 2026 01:30:06 +0000 (0:04:13.036) 0:05:33.993 ****** 2026-01-08 01:31:29.283273 | orchestrator | changed: [localhost] => (item=test) 2026-01-08 01:31:29.283282 | orchestrator | changed: [localhost] => (item=test-1) 2026-01-08 01:31:29.283286 | orchestrator | changed: [localhost] => (item=test-2) 2026-01-08 01:31:29.283291 | orchestrator | changed: [localhost] => (item=test-3) 2026-01-08 01:31:29.283295 | orchestrator | changed: [localhost] => (item=test-4) 2026-01-08 01:31:29.283299 | orchestrator | 2026-01-08 01:31:29.283304 | orchestrator | TASK [Add tag to instances] 
**************************************************** 2026-01-08 01:31:29.283308 | orchestrator | Thursday 08 January 2026 01:30:29 +0000 (0:00:22.996) 0:05:56.989 ****** 2026-01-08 01:31:29.283313 | orchestrator | changed: [localhost] => (item=test) 2026-01-08 01:31:29.283317 | orchestrator | changed: [localhost] => (item=test-1) 2026-01-08 01:31:29.283321 | orchestrator | changed: [localhost] => (item=test-2) 2026-01-08 01:31:29.283325 | orchestrator | changed: [localhost] => (item=test-3) 2026-01-08 01:31:29.283330 | orchestrator | changed: [localhost] => (item=test-4) 2026-01-08 01:31:29.283334 | orchestrator | 2026-01-08 01:31:29.283338 | orchestrator | TASK [Create test volume] ****************************************************** 2026-01-08 01:31:29.283342 | orchestrator | Thursday 08 January 2026 01:31:03 +0000 (0:00:34.066) 0:06:31.056 ****** 2026-01-08 01:31:29.283347 | orchestrator | changed: [localhost] 2026-01-08 01:31:29.283351 | orchestrator | 2026-01-08 01:31:29.283355 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-01-08 01:31:29.283362 | orchestrator | Thursday 08 January 2026 01:31:09 +0000 (0:00:06.099) 0:06:37.155 ****** 2026-01-08 01:31:29.283368 | orchestrator | changed: [localhost] 2026-01-08 01:31:29.283374 | orchestrator | 2026-01-08 01:31:29.283380 | orchestrator | TASK [Create floating ip address] ********************************************** 2026-01-08 01:31:29.283387 | orchestrator | Thursday 08 January 2026 01:31:23 +0000 (0:00:14.188) 0:06:51.344 ****** 2026-01-08 01:31:29.283393 | orchestrator | ok: [localhost] 2026-01-08 01:31:29.283400 | orchestrator | 2026-01-08 01:31:29.283406 | orchestrator | TASK [Print floating ip address] *********************************************** 2026-01-08 01:31:29.283413 | orchestrator | Thursday 08 January 2026 01:31:28 +0000 (0:00:05.485) 0:06:56.830 ****** 2026-01-08 01:31:29.283419 | orchestrator | ok: [localhost] => { 2026-01-08 
01:31:29.283426 | orchestrator |  "msg": "192.168.112.168" 2026-01-08 01:31:29.283433 | orchestrator | } 2026-01-08 01:31:29.283439 | orchestrator | 2026-01-08 01:31:29.283446 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 01:31:29.283452 | orchestrator | localhost : ok=22  changed=20  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-08 01:31:29.283465 | orchestrator | 2026-01-08 01:31:29.283473 | orchestrator | 2026-01-08 01:31:29.283477 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 01:31:29.283482 | orchestrator | Thursday 08 January 2026 01:31:28 +0000 (0:00:00.044) 0:06:56.875 ****** 2026-01-08 01:31:29.283486 | orchestrator | =============================================================================== 2026-01-08 01:31:29.283491 | orchestrator | Create test instances ------------------------------------------------- 253.04s 2026-01-08 01:31:29.283495 | orchestrator | Add tag to instances --------------------------------------------------- 34.07s 2026-01-08 01:31:29.283500 | orchestrator | Add metadata to instances ---------------------------------------------- 23.00s 2026-01-08 01:31:29.283504 | orchestrator | Attach test volume ----------------------------------------------------- 14.19s 2026-01-08 01:31:29.283508 | orchestrator | Add member roles to user test ------------------------------------------ 11.53s 2026-01-08 01:31:29.283513 | orchestrator | Create test router ----------------------------------------------------- 11.24s 2026-01-08 01:31:29.283517 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.70s 2026-01-08 01:31:29.283521 | orchestrator | Create test volume ------------------------------------------------------ 6.10s 2026-01-08 01:31:29.283526 | orchestrator | Create floating ip address ---------------------------------------------- 5.49s 2026-01-08 01:31:29.283530 
| orchestrator | Create test network ----------------------------------------------------- 5.37s 2026-01-08 01:31:29.283534 | orchestrator | Create ssh security group ----------------------------------------------- 5.12s 2026-01-08 01:31:29.283539 | orchestrator | Create test subnet ------------------------------------------------------ 4.67s 2026-01-08 01:31:29.283543 | orchestrator | Create test server group ------------------------------------------------ 4.38s 2026-01-08 01:31:29.283548 | orchestrator | Create test project ----------------------------------------------------- 4.17s 2026-01-08 01:31:29.283553 | orchestrator | Create test-admin user -------------------------------------------------- 4.16s 2026-01-08 01:31:29.283557 | orchestrator | Create test user -------------------------------------------------------- 4.15s 2026-01-08 01:31:29.283562 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.12s 2026-01-08 01:31:29.283566 | orchestrator | Create icmp security group ---------------------------------------------- 3.95s 2026-01-08 01:31:29.283571 | orchestrator | Create test keypair ----------------------------------------------------- 3.85s 2026-01-08 01:31:29.283575 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.76s 2026-01-08 01:31:29.602285 | orchestrator | + server_list 2026-01-08 01:31:29.602339 | orchestrator | + openstack --os-cloud test server list 2026-01-08 01:31:33.096063 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-01-08 01:31:33.096153 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-01-08 01:31:33.096159 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-01-08 01:31:33.096163 | orchestrator | | 
2361b99c-6d34-42a4-903b-11e08a07e452 | test-4 | ACTIVE | test=192.168.112.190, 192.168.200.193 | N/A (booted from volume) | SCS-1L-1 | 2026-01-08 01:31:33.096167 | orchestrator | | 61f611cc-0134-453e-bbfc-1d0713848f0b | test-3 | ACTIVE | test=192.168.112.127, 192.168.200.18 | N/A (booted from volume) | SCS-1L-1 | 2026-01-08 01:31:33.096171 | orchestrator | | a01026d0-34d8-4455-b6d6-a5102184e753 | test-2 | ACTIVE | test=192.168.112.196, 192.168.200.150 | N/A (booted from volume) | SCS-1L-1 | 2026-01-08 01:31:33.096175 | orchestrator | | 7d17376b-3cb5-4f03-8009-32424d92adfc | test-1 | ACTIVE | test=192.168.112.158, 192.168.200.133 | N/A (booted from volume) | SCS-1L-1 | 2026-01-08 01:31:33.096179 | orchestrator | | 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 | test | ACTIVE | test=192.168.112.168, 192.168.200.82 | N/A (booted from volume) | SCS-1L-1 | 2026-01-08 01:31:33.096196 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-01-08 01:31:33.378465 | orchestrator | + openstack --os-cloud test server show test 2026-01-08 01:31:36.356773 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-08 01:31:36.356839 | orchestrator | | Field | Value | 2026-01-08 01:31:36.356849 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-08 01:31:36.356855 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-01-08 01:31:36.356861 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-01-08 01:31:36.356867 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-01-08 01:31:36.356876 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-01-08 01:31:36.356883 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-01-08 01:31:36.356890 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-01-08 01:31:36.356916 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-01-08 01:31:36.356922 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-01-08 01:31:36.356928 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-01-08 01:31:36.356933 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-01-08 01:31:36.356938 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-01-08 01:31:36.356945 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-01-08 01:31:36.356952 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-01-08 01:31:36.356957 | orchestrator | | OS-EXT-STS:task_state | None | 2026-01-08 01:31:36.356962 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-01-08 01:31:36.356974 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-08T01:26:38.000000 | 2026-01-08 01:31:36.356983 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-01-08 01:31:36.356988 | orchestrator | | accessIPv4 | | 2026-01-08 01:31:36.356994 | orchestrator | | accessIPv6 | | 2026-01-08 01:31:36.356998 | orchestrator | | addresses 
| test=192.168.112.168, 192.168.200.82 | 2026-01-08 01:31:36.357002 | orchestrator | | config_drive | | 2026-01-08 01:31:36.357005 | orchestrator | | created | 2026-01-08T01:26:01Z | 2026-01-08 01:31:36.357008 | orchestrator | | description | None | 2026-01-08 01:31:36.357011 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-01-08 01:31:36.357016 | orchestrator | | hostId | 39162a20f65ed8a87d2baeb89101bc059dd8b555607e48e82001352d | 2026-01-08 01:31:36.357022 | orchestrator | | host_status | None | 2026-01-08 01:31:36.357028 | orchestrator | | id | 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 | 2026-01-08 01:31:36.357031 | orchestrator | | image | N/A (booted from volume) | 2026-01-08 01:31:36.357035 | orchestrator | | key_name | test | 2026-01-08 01:31:36.357038 | orchestrator | | locked | False | 2026-01-08 01:31:36.357041 | orchestrator | | locked_reason | None | 2026-01-08 01:31:36.357044 | orchestrator | | name | test | 2026-01-08 01:31:36.357048 | orchestrator | | pinned_availability_zone | None | 2026-01-08 01:31:36.357051 | orchestrator | | progress | 0 | 2026-01-08 01:31:36.357145 | orchestrator | | project_id | dc7aa91990b84152b7de2dd8df2a9074 | 2026-01-08 01:31:36.357157 | orchestrator | | properties | hostname='test' | 2026-01-08 01:31:36.357166 | orchestrator | | security_groups | name='ssh' | 2026-01-08 01:31:36.357171 | orchestrator | | | name='icmp' | 2026-01-08 01:31:36.357176 | orchestrator | | server_groups | None | 2026-01-08 01:31:36.357181 | orchestrator | | status | ACTIVE | 2026-01-08 01:31:36.357186 | orchestrator | | tags | test | 2026-01-08 01:31:36.357191 | orchestrator | | 
trusted_image_certificates | None | 2026-01-08 01:31:36.357196 | orchestrator | | updated | 2026-01-08T01:30:10Z | 2026-01-08 01:31:36.357210 | orchestrator | | user_id | d8bf712fc7e94c81905cf5985fb8fea0 | 2026-01-08 01:31:36.357219 | orchestrator | | volumes_attached | delete_on_termination='True', id='6e35b8a4-3cda-41a3-8fe0-ec93c69a17f6' | 2026-01-08 01:31:36.357224 | orchestrator | | | delete_on_termination='False', id='ce950a2e-515e-4583-bc8e-22224a54d768' | 2026-01-08 01:31:36.357459 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-08 01:31:36.618755 | orchestrator | + openstack --os-cloud test server show test-1 2026-01-08 01:31:39.369659 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-08 01:31:39.369707 | orchestrator | | Field | Value | 2026-01-08 01:31:39.369711 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 
2026-01-08 01:31:39.369715 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-01-08 01:31:39.369718 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-01-08 01:31:39.369731 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-01-08 01:31:39.369735 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-01-08 01:31:39.369738 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-01-08 01:31:39.369742 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-01-08 01:31:39.369760 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-01-08 01:31:39.369764 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-01-08 01:31:39.369768 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-01-08 01:31:39.369771 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-01-08 01:31:39.369774 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-01-08 01:31:39.369782 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-01-08 01:31:39.369786 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-01-08 01:31:39.369789 | orchestrator | | OS-EXT-STS:task_state | None | 2026-01-08 01:31:39.369794 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-01-08 01:31:39.369797 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-08T01:27:37.000000 | 2026-01-08 01:31:39.369803 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-01-08 01:31:39.369806 | orchestrator | | accessIPv4 | | 2026-01-08 01:31:39.369810 | orchestrator | | accessIPv6 | | 2026-01-08 01:31:39.369848 | orchestrator | | addresses | test=192.168.112.158, 192.168.200.133 | 2026-01-08 01:31:39.369852 | orchestrator | | config_drive | | 2026-01-08 01:31:39.369858 | orchestrator | | created | 2026-01-08T01:27:00Z | 2026-01-08 01:31:39.369861 | orchestrator | | description | None | 2026-01-08 01:31:39.369864 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', 
extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-01-08 01:31:39.369869 | orchestrator | | hostId | 0e4b940fa5dd94b8f6db0175f1bead21ad387a3e5d16168a8c0becd7 | 2026-01-08 01:31:39.369873 | orchestrator | | host_status | None | 2026-01-08 01:31:39.369879 | orchestrator | | id | 7d17376b-3cb5-4f03-8009-32424d92adfc | 2026-01-08 01:31:39.369882 | orchestrator | | image | N/A (booted from volume) | 2026-01-08 01:31:39.369888 | orchestrator | | key_name | test | 2026-01-08 01:31:39.369893 | orchestrator | | locked | False | 2026-01-08 01:31:39.369903 | orchestrator | | locked_reason | None | 2026-01-08 01:31:39.369909 | orchestrator | | name | test-1 | 2026-01-08 01:31:39.369914 | orchestrator | | pinned_availability_zone | None | 2026-01-08 01:31:39.369919 | orchestrator | | progress | 0 | 2026-01-08 01:31:39.369930 | orchestrator | | project_id | dc7aa91990b84152b7de2dd8df2a9074 | 2026-01-08 01:31:39.369935 | orchestrator | | properties | hostname='test-1' | 2026-01-08 01:31:39.369944 | orchestrator | | security_groups | name='ssh' | 2026-01-08 01:31:39.369950 | orchestrator | | | name='icmp' | 2026-01-08 01:31:39.369955 | orchestrator | | server_groups | None | 2026-01-08 01:31:39.369964 | orchestrator | | status | ACTIVE | 2026-01-08 01:31:39.369970 | orchestrator | | tags | test | 2026-01-08 01:31:39.369976 | orchestrator | | trusted_image_certificates | None | 2026-01-08 01:31:39.369992 | orchestrator | | updated | 2026-01-08T01:30:15Z | 2026-01-08 01:31:39.370049 | orchestrator | | user_id | d8bf712fc7e94c81905cf5985fb8fea0 | 2026-01-08 01:31:39.370105 | orchestrator | | volumes_attached | delete_on_termination='True', id='ce39c1c4-8dbb-4460-9903-f7097a669042' | 2026-01-08 01:31:39.372080 | orchestrator | 
+-------------------------------------+---------------------------------------------------------------------------------+
2026-01-08 01:31:39.616153 | orchestrator | + openstack --os-cloud test server show test-2
2026-01-08 01:31:42.393471 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-01-08 01:31:42.393553 | orchestrator | | Field | Value |
2026-01-08 01:31:42.393586 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-01-08 01:31:42.393597 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-01-08 01:31:42.393608 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-01-08 01:31:42.393618 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-01-08 01:31:42.393628 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2026-01-08 01:31:42.393649 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-01-08 01:31:42.393660 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-01-08 01:31:42.393684 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-01-08 01:31:42.393695 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-01-08 01:31:42.393712 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-01-08 01:31:42.393722 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-01-08 01:31:42.393732 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-01-08 01:31:42.393742 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-01-08 01:31:42.393752 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-01-08 01:31:42.393762 | orchestrator | | OS-EXT-STS:task_state | None |
2026-01-08 01:31:42.393772 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-01-08 01:31:42.393782 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-08T01:28:31.000000 |
2026-01-08 01:31:42.393811 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-01-08 01:31:42.393822 | orchestrator | | accessIPv4 | |
2026-01-08 01:31:42.393838 | orchestrator | | accessIPv6 | |
2026-01-08 01:31:42.393848 | orchestrator | | addresses | test=192.168.112.196, 192.168.200.150 |
2026-01-08 01:31:42.393858 | orchestrator | | config_drive | |
2026-01-08 01:31:42.393868 | orchestrator | | created | 2026-01-08T01:27:55Z |
2026-01-08 01:31:42.393878 | orchestrator | | description | None |
2026-01-08 01:31:42.393888 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-01-08 01:31:42.393901 | orchestrator | | hostId | aa527ebdd60283284d14ff6594f02f18a34f997dbdf679020f557d9f |
2026-01-08 01:31:42.393911 | orchestrator | | host_status | None |
2026-01-08 01:31:42.393927 | orchestrator | | id | a01026d0-34d8-4455-b6d6-a5102184e753 |
2026-01-08 01:31:42.393943 | orchestrator | | image | N/A (booted from volume) |
2026-01-08 01:31:42.393953 | orchestrator | | key_name | test |
2026-01-08 01:31:42.393962 | orchestrator | | locked | False |
2026-01-08 01:31:42.393973 | orchestrator | | locked_reason | None |
2026-01-08 01:31:42.393982 | orchestrator | | name | test-2 |
2026-01-08 01:31:42.393992 | orchestrator | | pinned_availability_zone | None |
2026-01-08 01:31:42.394002 | orchestrator | | progress | 0 |
2026-01-08 01:31:42.394048 | orchestrator | | project_id | dc7aa91990b84152b7de2dd8df2a9074 |
2026-01-08 01:31:42.394083 | orchestrator | | properties | hostname='test-2' |
2026-01-08 01:31:42.394109 | orchestrator | | security_groups | name='ssh' |
2026-01-08 01:31:42.394121 | orchestrator | | | name='icmp' |
2026-01-08 01:31:42.394133 | orchestrator | | server_groups | None |
2026-01-08 01:31:42.394145 | orchestrator | | status | ACTIVE |
2026-01-08 01:31:42.394157 | orchestrator | | tags | test |
2026-01-08 01:31:42.394168 | orchestrator | | trusted_image_certificates | None |
2026-01-08 01:31:42.394180 | orchestrator | | updated | 2026-01-08T01:30:20Z |
2026-01-08 01:31:42.394191 | orchestrator | | user_id | d8bf712fc7e94c81905cf5985fb8fea0 |
2026-01-08 01:31:42.394206 | orchestrator | | volumes_attached | delete_on_termination='True', id='60faa50b-f38e-408b-830b-aec8aefd31b9' |
2026-01-08 01:31:42.397238 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-01-08 01:31:42.656952 | orchestrator | + openstack --os-cloud test server show test-3
2026-01-08 01:31:45.552011 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-01-08 01:31:45.552123 | orchestrator | | Field | Value |
2026-01-08 01:31:45.552134 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-01-08 01:31:45.552140 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-01-08 01:31:45.552144 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-01-08 01:31:45.552148 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-01-08 01:31:45.552152 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2026-01-08 01:31:45.552156 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-01-08 01:31:45.552168 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-01-08 01:31:45.552198 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-01-08 01:31:45.552203 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-01-08 01:31:45.552207 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-01-08 01:31:45.552210 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-01-08 01:31:45.552214 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-01-08 01:31:45.552221 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-01-08 01:31:45.552227 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-01-08 01:31:45.552233 | orchestrator | | OS-EXT-STS:task_state | None |
2026-01-08 01:31:45.552239 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-01-08 01:31:45.552253 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-08T01:29:13.000000 |
2026-01-08 01:31:45.552264 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-01-08 01:31:45.552270 | orchestrator | | accessIPv4 | |
2026-01-08 01:31:45.552276 | orchestrator | | accessIPv6 | |
2026-01-08 01:31:45.552282 | orchestrator | | addresses | test=192.168.112.127, 192.168.200.18 |
2026-01-08 01:31:45.552289 | orchestrator | | config_drive | |
2026-01-08 01:31:45.552295 | orchestrator | | created | 2026-01-08T01:28:48Z |
2026-01-08 01:31:45.552301 | orchestrator | | description | None |
2026-01-08 01:31:45.552307 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-01-08 01:31:45.552318 | orchestrator | | hostId | 0e4b940fa5dd94b8f6db0175f1bead21ad387a3e5d16168a8c0becd7 |
2026-01-08 01:31:45.552325 | orchestrator | | host_status | None |
2026-01-08 01:31:45.552337 | orchestrator | | id | 61f611cc-0134-453e-bbfc-1d0713848f0b |
2026-01-08 01:31:45.552344 | orchestrator | | image | N/A (booted from volume) |
2026-01-08 01:31:45.552351 | orchestrator | | key_name | test |
2026-01-08 01:31:45.552357 | orchestrator | | locked | False |
2026-01-08 01:31:45.552364 | orchestrator | | locked_reason | None |
2026-01-08 01:31:45.552371 | orchestrator | | name | test-3 |
2026-01-08 01:31:45.552378 | orchestrator | | pinned_availability_zone | None |
2026-01-08 01:31:45.552389 | orchestrator | | progress | 0 |
2026-01-08 01:31:45.552641 | orchestrator | | project_id | dc7aa91990b84152b7de2dd8df2a9074 |
2026-01-08 01:31:45.552662 | orchestrator | | properties | hostname='test-3' |
2026-01-08 01:31:45.552678 | orchestrator | | security_groups | name='ssh' |
2026-01-08 01:31:45.552685 | orchestrator | | | name='icmp' |
2026-01-08 01:31:45.552691 | orchestrator | | server_groups | None |
2026-01-08 01:31:45.552698 | orchestrator | | status | ACTIVE |
2026-01-08 01:31:45.552705 | orchestrator | | tags | test |
2026-01-08 01:31:45.552711 | orchestrator | | trusted_image_certificates | None |
2026-01-08 01:31:45.552729 | orchestrator | | updated | 2026-01-08T01:30:24Z |
2026-01-08 01:31:45.552737 | orchestrator | | user_id | d8bf712fc7e94c81905cf5985fb8fea0 |
2026-01-08 01:31:45.552744 | orchestrator | | volumes_attached | delete_on_termination='True', id='4ebfdaa2-07f1-4928-8d29-6966fc525dc0' |
2026-01-08 01:31:45.556970 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-01-08 01:31:45.807576 | orchestrator | + openstack --os-cloud test server show test-4
2026-01-08 01:31:48.629937 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-01-08 01:31:48.630084 | orchestrator | | Field | Value |
2026-01-08 01:31:48.630099 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-01-08 01:31:48.630107 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-01-08 01:31:48.630114 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-01-08 01:31:48.630134 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-01-08 01:31:48.630159 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2026-01-08 01:31:48.630165 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-01-08 01:31:48.630172 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-01-08 01:31:48.630195 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-01-08 01:31:48.630202 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-01-08 01:31:48.630209 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-01-08 01:31:48.630215 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-01-08 01:31:48.630222 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-01-08 01:31:48.630228 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-01-08 01:31:48.630244 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-01-08 01:31:48.630248 | orchestrator | | OS-EXT-STS:task_state | None |
2026-01-08 01:31:48.630252 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-01-08 01:31:48.630256 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-08T01:29:54.000000 |
2026-01-08 01:31:48.630264 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-01-08 01:31:48.630268 | orchestrator | | accessIPv4 | |
2026-01-08 01:31:48.630272 | orchestrator | | accessIPv6 | |
2026-01-08 01:31:48.630276 | orchestrator | | addresses | test=192.168.112.190, 192.168.200.193 |
2026-01-08 01:31:48.630280 | orchestrator | | config_drive | |
2026-01-08 01:31:48.630287 | orchestrator | | created | 2026-01-08T01:29:30Z |
2026-01-08 01:31:48.630293 | orchestrator | | description | None |
2026-01-08 01:31:48.630297 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-01-08 01:31:48.630301 | orchestrator | | hostId | aa527ebdd60283284d14ff6594f02f18a34f997dbdf679020f557d9f |
2026-01-08 01:31:48.630305 | orchestrator | | host_status | None |
2026-01-08 01:31:48.630314 | orchestrator | | id | 2361b99c-6d34-42a4-903b-11e08a07e452 |
2026-01-08 01:31:48.630318 | orchestrator | | image | N/A (booted from volume) |
2026-01-08 01:31:48.630322 | orchestrator | | key_name | test |
2026-01-08 01:31:48.630326 | orchestrator | | locked | False |
2026-01-08 01:31:48.630334 | orchestrator | | locked_reason | None |
2026-01-08 01:31:48.630338 | orchestrator | | name | test-4 |
2026-01-08 01:31:48.630345 | orchestrator | | pinned_availability_zone | None |
2026-01-08 01:31:48.630349 | orchestrator | | progress | 0 |
2026-01-08 01:31:48.630353 | orchestrator | | project_id | dc7aa91990b84152b7de2dd8df2a9074 |
2026-01-08 01:31:48.630357 | orchestrator | | properties | hostname='test-4' |
2026-01-08 01:31:48.630365 | orchestrator | | security_groups | name='ssh' |
2026-01-08 01:31:48.630369 | orchestrator | | | name='icmp' |
2026-01-08 01:31:48.630373 | orchestrator | | server_groups | None |
2026-01-08 01:31:48.630377 | orchestrator | | status | ACTIVE |
2026-01-08 01:31:48.630388 | orchestrator | | tags | test |
2026-01-08 01:31:48.630392 | orchestrator | | trusted_image_certificates | None |
2026-01-08 01:31:48.630398 | orchestrator | | updated | 2026-01-08T01:30:28Z |
2026-01-08 01:31:48.630402 | orchestrator | | user_id | d8bf712fc7e94c81905cf5985fb8fea0 |
2026-01-08 01:31:48.630406 | orchestrator | | volumes_attached | delete_on_termination='True', id='1c73133b-2e35-4e1a-877b-ec7bf0c3579e' |
2026-01-08 01:31:48.635761 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-01-08 01:31:48.896014 | orchestrator | + server_ping
2026-01-08 01:31:48.898286 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-01-08 01:31:48.898379 | orchestrator | ++ tr -d '\r'
2026-01-08 01:31:51.715948 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-08 01:31:51.716033 | orchestrator | + ping -c3 192.168.112.158
2026-01-08 01:31:51.734127 | orchestrator | PING 192.168.112.158 (192.168.112.158) 56(84) bytes of data.
2026-01-08 01:31:51.734201 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=1 ttl=63 time=9.04 ms
2026-01-08 01:31:52.729324 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=2 ttl=63 time=2.48 ms
2026-01-08 01:31:53.729948 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=3 ttl=63 time=1.81 ms
2026-01-08 01:31:53.730074 | orchestrator |
2026-01-08 01:31:53.730083 | orchestrator | --- 192.168.112.158 ping statistics ---
2026-01-08 01:31:53.730091 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-08 01:31:53.730096 | orchestrator | rtt min/avg/max/mdev = 1.805/4.442/9.040/3.262 ms
2026-01-08 01:31:53.730546 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-08 01:31:53.730560 | orchestrator | + ping -c3 192.168.112.127
2026-01-08 01:31:53.742686 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data.
2026-01-08 01:31:53.742771 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=7.23 ms
2026-01-08 01:31:54.739850 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.65 ms
2026-01-08 01:31:55.741992 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=1.70 ms
2026-01-08 01:31:55.742137 | orchestrator |
2026-01-08 01:31:55.742148 | orchestrator | --- 192.168.112.127 ping statistics ---
2026-01-08 01:31:55.742157 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-08 01:31:55.742163 | orchestrator | rtt min/avg/max/mdev = 1.699/3.857/7.227/2.413 ms
2026-01-08 01:31:55.742170 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-08 01:31:55.742176 | orchestrator | + ping -c3 192.168.112.168
2026-01-08 01:31:55.755184 | orchestrator | PING 192.168.112.168 (192.168.112.168) 56(84) bytes of data.
2026-01-08 01:31:55.755257 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=1 ttl=63 time=8.89 ms
2026-01-08 01:31:56.749845 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=2 ttl=63 time=2.38 ms
2026-01-08 01:31:57.751190 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=3 ttl=63 time=1.74 ms
2026-01-08 01:31:57.751270 | orchestrator |
2026-01-08 01:31:57.751279 | orchestrator | --- 192.168.112.168 ping statistics ---
2026-01-08 01:31:57.751286 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-08 01:31:57.751293 | orchestrator | rtt min/avg/max/mdev = 1.738/4.334/8.887/3.230 ms
2026-01-08 01:31:57.751997 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-08 01:31:57.752017 | orchestrator | + ping -c3 192.168.112.196
2026-01-08 01:31:57.761933 | orchestrator | PING 192.168.112.196 (192.168.112.196) 56(84) bytes of data.
2026-01-08 01:31:57.762079 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=1 ttl=63 time=5.44 ms
2026-01-08 01:31:58.759914 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=2 ttl=63 time=2.58 ms
2026-01-08 01:31:59.760635 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=3 ttl=63 time=1.82 ms
2026-01-08 01:31:59.760749 | orchestrator |
2026-01-08 01:31:59.760769 | orchestrator | --- 192.168.112.196 ping statistics ---
2026-01-08 01:31:59.760785 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-01-08 01:31:59.760799 | orchestrator | rtt min/avg/max/mdev = 1.824/3.279/5.437/1.556 ms
2026-01-08 01:31:59.761209 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-08 01:31:59.761254 | orchestrator | + ping -c3 192.168.112.190
2026-01-08 01:31:59.773865 | orchestrator | PING 192.168.112.190 (192.168.112.190) 56(84) bytes of data.
2026-01-08 01:31:59.773951 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=1 ttl=63 time=7.91 ms
2026-01-08 01:32:00.768852 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=2 ttl=63 time=2.31 ms
2026-01-08 01:32:01.770308 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=3 ttl=63 time=2.02 ms
2026-01-08 01:32:01.770379 | orchestrator |
2026-01-08 01:32:01.770385 | orchestrator | --- 192.168.112.190 ping statistics ---
2026-01-08 01:32:01.770392 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-01-08 01:32:01.770397 | orchestrator | rtt min/avg/max/mdev = 2.018/4.079/7.908/2.709 ms
2026-01-08 01:32:01.771298 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-01-08 01:32:01.771344 | orchestrator | + compute_list
2026-01-08 01:32:01.771351 | orchestrator | + osism manage compute list testbed-node-3
2026-01-08 01:32:05.271554 | orchestrator | +--------------------------------------+--------+----------+
2026-01-08 01:32:05.271628 | orchestrator | | ID | Name | Status |
2026-01-08 01:32:05.271634 | orchestrator | |--------------------------------------+--------+----------|
2026-01-08 01:32:05.271639 | orchestrator | | 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 | test | ACTIVE |
2026-01-08 01:32:05.271643 | orchestrator | +--------------------------------------+--------+----------+
2026-01-08 01:32:05.663454 | orchestrator | + osism manage compute list testbed-node-4
2026-01-08 01:32:09.084997 | orchestrator | +--------------------------------------+--------+----------+
2026-01-08 01:32:09.085080 | orchestrator | | ID | Name | Status |
2026-01-08 01:32:09.085086 | orchestrator | |--------------------------------------+--------+----------|
2026-01-08 01:32:09.085090 | orchestrator | | 2361b99c-6d34-42a4-903b-11e08a07e452 | test-4 | ACTIVE |
2026-01-08 01:32:09.085094 | orchestrator | | a01026d0-34d8-4455-b6d6-a5102184e753 | test-2 | ACTIVE |
2026-01-08 01:32:09.085099 | orchestrator | +--------------------------------------+--------+----------+
2026-01-08 01:32:09.430362 | orchestrator | + osism manage compute list testbed-node-5
2026-01-08 01:32:13.027094 | orchestrator | +--------------------------------------+--------+----------+
2026-01-08 01:32:13.027178 | orchestrator | | ID | Name | Status |
2026-01-08 01:32:13.027185 | orchestrator | |--------------------------------------+--------+----------|
2026-01-08 01:32:13.027189 | orchestrator | | 61f611cc-0134-453e-bbfc-1d0713848f0b | test-3 | ACTIVE |
2026-01-08 01:32:13.027193 | orchestrator | | 7d17376b-3cb5-4f03-8009-32424d92adfc | test-1 | ACTIVE |
2026-01-08 01:32:13.027198 | orchestrator | +--------------------------------------+--------+----------+
2026-01-08 01:32:13.390432 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4
2026-01-08 01:32:16.821393 | orchestrator | 2026-01-08 01:32:16 | INFO  | Live migrating server 2361b99c-6d34-42a4-903b-11e08a07e452
2026-01-08 01:32:29.868301 | orchestrator | 2026-01-08 01:32:29 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) is still in progress
2026-01-08 01:32:32.292162 | orchestrator | 2026-01-08 01:32:32 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) is still in progress
2026-01-08 01:32:34.768704 | orchestrator | 2026-01-08 01:32:34 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) is still in progress
2026-01-08 01:32:37.145557 | orchestrator | 2026-01-08 01:32:37 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) is still in progress
2026-01-08 01:32:39.543930 | orchestrator | 2026-01-08 01:32:39 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) is still in progress
2026-01-08 01:32:41.818282 | orchestrator | 2026-01-08 01:32:41 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) is still in progress
2026-01-08 01:32:44.210918 | orchestrator | 2026-01-08 01:32:44 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) is still in progress
2026-01-08 01:32:46.571170 | orchestrator | 2026-01-08 01:32:46 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) is still in progress
2026-01-08 01:32:48.923719 | orchestrator | 2026-01-08 01:32:48 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) completed with status ACTIVE
2026-01-08 01:32:48.923777 | orchestrator | 2026-01-08 01:32:48 | INFO  | Live migrating server a01026d0-34d8-4455-b6d6-a5102184e753
2026-01-08 01:33:02.351747 | orchestrator | 2026-01-08 01:33:02 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) is still in progress
2026-01-08 01:33:04.689438 | orchestrator | 2026-01-08 01:33:04 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) is still in progress
2026-01-08 01:33:07.041395 | orchestrator | 2026-01-08 01:33:07 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) is still in progress
2026-01-08 01:33:09.389028 | orchestrator | 2026-01-08 01:33:09 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) is still in progress
2026-01-08 01:33:11.800886 | orchestrator | 2026-01-08 01:33:11 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) is still in progress
2026-01-08 01:33:14.142087 | orchestrator | 2026-01-08 01:33:14 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) is still in progress
2026-01-08 01:33:16.481532 | orchestrator | 2026-01-08 01:33:16 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) is still in progress
2026-01-08 01:33:18.777817 | orchestrator | 2026-01-08 01:33:18 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) is still in progress
2026-01-08 01:33:21.066084 | orchestrator | 2026-01-08 01:33:21 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) completed with status ACTIVE
2026-01-08 01:33:21.426223 | orchestrator | + compute_list
2026-01-08 01:33:21.426313 | orchestrator | + osism manage compute list testbed-node-3
2026-01-08 01:33:24.460870 | orchestrator | +--------------------------------------+--------+----------+
2026-01-08 01:33:24.460931 | orchestrator | | ID | Name | Status |
2026-01-08 01:33:24.460941 | orchestrator | |--------------------------------------+--------+----------|
2026-01-08 01:33:24.460948 | orchestrator | | 2361b99c-6d34-42a4-903b-11e08a07e452 | test-4 | ACTIVE |
2026-01-08 01:33:24.460953 | orchestrator | | a01026d0-34d8-4455-b6d6-a5102184e753 | test-2 | ACTIVE |
2026-01-08 01:33:24.460957 | orchestrator | | 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 | test | ACTIVE |
2026-01-08 01:33:24.460961 | orchestrator | +--------------------------------------+--------+----------+
2026-01-08 01:33:24.792039 | orchestrator | + osism manage compute list testbed-node-4
2026-01-08 01:33:27.768054 | orchestrator | +------+--------+----------+
2026-01-08 01:33:27.768181 | orchestrator | | ID | Name | Status |
2026-01-08 01:33:27.768196 | orchestrator | |------+--------+----------|
2026-01-08 01:33:27.768202 | orchestrator | +------+--------+----------+
2026-01-08 01:33:28.084465 | orchestrator | + osism manage compute list testbed-node-5
2026-01-08 01:33:31.147677 | orchestrator | +--------------------------------------+--------+----------+
2026-01-08 01:33:31.147766 | orchestrator | | ID | Name | Status |
2026-01-08 01:33:31.147776 | orchestrator | |--------------------------------------+--------+----------|
2026-01-08 01:33:31.147782 | orchestrator | | 61f611cc-0134-453e-bbfc-1d0713848f0b | test-3 | ACTIVE |
2026-01-08 01:33:31.147789 | orchestrator | | 7d17376b-3cb5-4f03-8009-32424d92adfc | test-1 | ACTIVE |
2026-01-08 01:33:31.147796 | orchestrator | +--------------------------------------+--------+----------+
2026-01-08 01:33:31.487142 | orchestrator | + server_ping
2026-01-08 01:33:31.488614 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-01-08 01:33:31.488661 | orchestrator | ++ tr -d '\r'
2026-01-08 01:33:34.593941 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-08 01:33:34.594080 | orchestrator | + ping -c3 192.168.112.158
2026-01-08 01:33:34.604815 | orchestrator | PING 192.168.112.158 (192.168.112.158) 56(84) bytes of data.
2026-01-08 01:33:34.604918 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=1 ttl=63 time=5.87 ms
2026-01-08 01:33:35.602188 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=2 ttl=63 time=1.88 ms
2026-01-08 01:33:36.604258 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=3 ttl=63 time=2.03 ms
2026-01-08 01:33:36.604346 | orchestrator |
2026-01-08 01:33:36.604353 | orchestrator | --- 192.168.112.158 ping statistics ---
2026-01-08 01:33:36.604359 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-01-08 01:33:36.604365 | orchestrator | rtt min/avg/max/mdev = 1.878/3.258/5.872/1.848 ms
2026-01-08 01:33:36.604370 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-08 01:33:36.604375 | orchestrator | + ping -c3 192.168.112.127
2026-01-08 01:33:36.616825 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data.
2026-01-08 01:33:36.616896 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=7.67 ms
2026-01-08 01:33:37.613570 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.23 ms
2026-01-08 01:33:38.614679 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=2.10 ms
2026-01-08 01:33:38.614758 | orchestrator |
2026-01-08 01:33:38.614768 | orchestrator | --- 192.168.112.127 ping statistics ---
2026-01-08 01:33:38.614776 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-08 01:33:38.614782 | orchestrator | rtt min/avg/max/mdev = 2.100/3.999/7.673/2.597 ms
2026-01-08 01:33:38.615350 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-08 01:33:38.615390 | orchestrator | + ping -c3 192.168.112.168
2026-01-08 01:33:38.625323 | orchestrator | PING 192.168.112.168 (192.168.112.168) 56(84) bytes of data.
2026-01-08 01:33:38.625404 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=1 ttl=63 time=5.60 ms
2026-01-08 01:33:39.624231 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=2 ttl=63 time=2.49 ms
2026-01-08 01:33:40.625661 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=3 ttl=63 time=1.73 ms
2026-01-08 01:33:40.625764 | orchestrator |
2026-01-08 01:33:40.625776 | orchestrator | --- 192.168.112.168 ping statistics ---
2026-01-08 01:33:40.625785 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-01-08 01:33:40.625792 | orchestrator | rtt min/avg/max/mdev = 1.734/3.273/5.595/1.670 ms
2026-01-08 01:33:40.626153 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-08 01:33:40.626178 | orchestrator | + ping -c3 192.168.112.196
2026-01-08 01:33:40.635658 | orchestrator | PING 192.168.112.196 (192.168.112.196) 56(84) bytes of data.
2026-01-08 01:33:40.635737 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=1 ttl=63 time=6.12 ms
2026-01-08 01:33:41.633714 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=2 ttl=63 time=2.33 ms
2026-01-08 01:33:42.634789 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=3 ttl=63 time=2.16 ms
2026-01-08 01:33:42.635010 | orchestrator |
2026-01-08 01:33:42.635027 | orchestrator | --- 192.168.112.196 ping statistics ---
2026-01-08 01:33:42.635033 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-08 01:33:42.635037 | orchestrator | rtt min/avg/max/mdev = 2.159/3.536/6.117/1.826 ms
2026-01-08 01:33:42.635043 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-08 01:33:42.635047 | orchestrator | + ping -c3 192.168.112.190
2026-01-08 01:33:42.654395 | orchestrator | PING 192.168.112.190 (192.168.112.190) 56(84) bytes of data.
2026-01-08 01:33:42.654466 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=1 ttl=63 time=6.37 ms
2026-01-08 01:33:43.642507 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=2 ttl=63 time=2.57 ms
2026-01-08 01:33:44.643615 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=3 ttl=63 time=1.61 ms
2026-01-08 01:33:44.643685 | orchestrator |
2026-01-08 01:33:44.643705 | orchestrator | --- 192.168.112.190 ping statistics ---
2026-01-08 01:33:44.643711 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-08 01:33:44.643715 | orchestrator | rtt min/avg/max/mdev = 1.610/3.517/6.368/2.053 ms
2026-01-08 01:33:44.643720 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5
2026-01-08 01:33:47.998809 | orchestrator | 2026-01-08 01:33:47 | INFO  | Live migrating server 61f611cc-0134-453e-bbfc-1d0713848f0b
2026-01-08 01:34:00.689167 | orchestrator | 2026-01-08 01:34:00 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:34:03.058676 | orchestrator | 2026-01-08 01:34:03 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:34:05.348357 | orchestrator | 2026-01-08 01:34:05 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:34:07.690088 | orchestrator | 2026-01-08 01:34:07 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:34:10.021413 | orchestrator | 2026-01-08 01:34:10 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:34:12.318588 | orchestrator | 2026-01-08 01:34:12 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:34:14.685832 | orchestrator | 2026-01-08 01:34:14 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:34:16.965433 | orchestrator | 2026-01-08 01:34:16 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:34:19.288196 | orchestrator | 2026-01-08 01:34:19 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:34:21.580711 | orchestrator | 2026-01-08 01:34:21 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) completed with status ACTIVE
2026-01-08 01:34:21.580779 | orchestrator | 2026-01-08 01:34:21 | INFO  | Live migrating server 7d17376b-3cb5-4f03-8009-32424d92adfc
2026-01-08 01:34:34.898448 | orchestrator | 2026-01-08 01:34:34 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:34:37.304837 | orchestrator | 2026-01-08 01:34:37 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:34:39.622620 | orchestrator | 2026-01-08 01:34:39 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:34:41.925446 | orchestrator | 2026-01-08 01:34:41 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:34:44.238099 | orchestrator | 2026-01-08 01:34:44 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:34:46.640999 | orchestrator | 2026-01-08 01:34:46 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:34:49.067706 | orchestrator | 2026-01-08 01:34:49 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:34:51.356382 | orchestrator | 2026-01-08 01:34:51 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:34:53.698197 | orchestrator | 2026-01-08 01:34:53 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) completed with status ACTIVE
2026-01-08 01:34:54.040674 | orchestrator | + compute_list
2026-01-08 01:34:54.040761 | orchestrator | + osism manage compute list testbed-node-3
2026-01-08 01:34:57.607803 | orchestrator | +--------------------------------------+--------+----------+
2026-01-08 01:34:57.607950 | orchestrator | | ID | Name | Status |
2026-01-08 01:34:57.607964 | orchestrator | |--------------------------------------+--------+----------|
2026-01-08 01:34:57.607974 | orchestrator | | 2361b99c-6d34-42a4-903b-11e08a07e452 | test-4 | ACTIVE |
2026-01-08 01:34:57.607984 | orchestrator | | 61f611cc-0134-453e-bbfc-1d0713848f0b | test-3 | ACTIVE |
2026-01-08 01:34:57.607993 | orchestrator | | a01026d0-34d8-4455-b6d6-a5102184e753 | test-2 | ACTIVE |
2026-01-08 01:34:57.608002 | orchestrator | | 7d17376b-3cb5-4f03-8009-32424d92adfc | test-1 | ACTIVE |
2026-01-08 01:34:57.608012 | orchestrator | | 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 |
test | ACTIVE | 2026-01-08 01:34:57.608021 | orchestrator | +--------------------------------------+--------+----------+ 2026-01-08 01:34:57.926217 | orchestrator | + osism manage compute list testbed-node-4 2026-01-08 01:35:00.819329 | orchestrator | +------+--------+----------+ 2026-01-08 01:35:00.819449 | orchestrator | | ID | Name | Status | 2026-01-08 01:35:00.819469 | orchestrator | |------+--------+----------| 2026-01-08 01:35:00.819483 | orchestrator | +------+--------+----------+ 2026-01-08 01:35:01.180776 | orchestrator | + osism manage compute list testbed-node-5 2026-01-08 01:35:04.072220 | orchestrator | +------+--------+----------+ 2026-01-08 01:35:04.072324 | orchestrator | | ID | Name | Status | 2026-01-08 01:35:04.072334 | orchestrator | |------+--------+----------| 2026-01-08 01:35:04.072356 | orchestrator | +------+--------+----------+ 2026-01-08 01:35:04.403595 | orchestrator | + server_ping 2026-01-08 01:35:04.405327 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-01-08 01:35:04.405466 | orchestrator | ++ tr -d '\r' 2026-01-08 01:35:07.254406 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-08 01:35:07.254498 | orchestrator | + ping -c3 192.168.112.158 2026-01-08 01:35:07.265966 | orchestrator | PING 192.168.112.158 (192.168.112.158) 56(84) bytes of data. 
2026-01-08 01:35:07.266096 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=1 ttl=63 time=7.92 ms
2026-01-08 01:35:08.261659 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=2 ttl=63 time=2.64 ms
2026-01-08 01:35:09.263403 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=3 ttl=63 time=1.70 ms
2026-01-08 01:35:09.263499 | orchestrator |
2026-01-08 01:35:09.263510 | orchestrator | --- 192.168.112.158 ping statistics ---
2026-01-08 01:35:09.263518 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-08 01:35:09.263525 | orchestrator | rtt min/avg/max/mdev = 1.703/4.084/7.915/2.735 ms
2026-01-08 01:35:09.263533 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-08 01:35:09.263541 | orchestrator | + ping -c3 192.168.112.127
2026-01-08 01:35:09.274222 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data.
2026-01-08 01:35:09.274335 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=6.54 ms
2026-01-08 01:35:10.271438 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.04 ms
2026-01-08 01:35:11.273320 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=1.63 ms
2026-01-08 01:35:11.273391 | orchestrator |
2026-01-08 01:35:11.273398 | orchestrator | --- 192.168.112.127 ping statistics ---
2026-01-08 01:35:11.273404 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-08 01:35:11.273409 | orchestrator | rtt min/avg/max/mdev = 1.625/3.403/6.540/2.224 ms
2026-01-08 01:35:11.273414 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-08 01:35:11.273420 | orchestrator | + ping -c3 192.168.112.168
2026-01-08 01:35:11.284505 | orchestrator | PING 192.168.112.168 (192.168.112.168) 56(84) bytes of data.
2026-01-08 01:35:11.284579 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=1 ttl=63 time=6.49 ms
2026-01-08 01:35:12.281846 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=2 ttl=63 time=1.95 ms
2026-01-08 01:35:13.282102 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=3 ttl=63 time=1.57 ms
2026-01-08 01:35:13.282175 | orchestrator |
2026-01-08 01:35:13.282182 | orchestrator | --- 192.168.112.168 ping statistics ---
2026-01-08 01:35:13.282188 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-01-08 01:35:13.282193 | orchestrator | rtt min/avg/max/mdev = 1.567/3.338/6.494/2.236 ms
2026-01-08 01:35:13.282421 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-08 01:35:13.282433 | orchestrator | + ping -c3 192.168.112.196
2026-01-08 01:35:13.293672 | orchestrator | PING 192.168.112.196 (192.168.112.196) 56(84) bytes of data.
2026-01-08 01:35:13.293745 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=1 ttl=63 time=7.66 ms
2026-01-08 01:35:14.290194 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=2 ttl=63 time=1.94 ms
2026-01-08 01:35:15.291336 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=3 ttl=63 time=1.81 ms
2026-01-08 01:35:15.291427 | orchestrator |
2026-01-08 01:35:15.291439 | orchestrator | --- 192.168.112.196 ping statistics ---
2026-01-08 01:35:15.291448 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-08 01:35:15.291455 | orchestrator | rtt min/avg/max/mdev = 1.805/3.799/7.655/2.726 ms
2026-01-08 01:35:15.292027 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-08 01:35:15.292092 | orchestrator | + ping -c3 192.168.112.190
2026-01-08 01:35:15.303096 | orchestrator | PING 192.168.112.190 (192.168.112.190) 56(84) bytes of data.
2026-01-08 01:35:15.303195 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=1 ttl=63 time=6.47 ms
2026-01-08 01:35:16.299739 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=2 ttl=63 time=1.62 ms
2026-01-08 01:35:17.302277 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=3 ttl=63 time=1.29 ms
2026-01-08 01:35:17.302355 | orchestrator |
2026-01-08 01:35:17.302367 | orchestrator | --- 192.168.112.190 ping statistics ---
2026-01-08 01:35:17.302377 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-01-08 01:35:17.302387 | orchestrator | rtt min/avg/max/mdev = 1.286/3.126/6.472/2.369 ms
2026-01-08 01:35:17.302396 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2026-01-08 01:35:20.699779 | orchestrator | 2026-01-08 01:35:20 | INFO  | Live migrating server 2361b99c-6d34-42a4-903b-11e08a07e452
2026-01-08 01:35:31.758265 | orchestrator | 2026-01-08 01:35:31 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) is still in progress
2026-01-08 01:35:34.098258 | orchestrator | 2026-01-08 01:35:34 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) is still in progress
2026-01-08 01:35:36.460201 | orchestrator | 2026-01-08 01:35:36 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) is still in progress
2026-01-08 01:35:38.838844 | orchestrator | 2026-01-08 01:35:38 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) is still in progress
2026-01-08 01:35:41.199969 | orchestrator | 2026-01-08 01:35:41 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) is still in progress
2026-01-08 01:35:43.456750 | orchestrator | 2026-01-08 01:35:43 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) is still in progress
2026-01-08 01:35:45.694064 | orchestrator | 2026-01-08 01:35:45 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) is still in progress
2026-01-08 01:35:47.987819 | orchestrator | 2026-01-08 01:35:47 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) is still in progress
2026-01-08 01:35:50.274567 | orchestrator | 2026-01-08 01:35:50 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) completed with status ACTIVE
2026-01-08 01:35:50.274632 | orchestrator | 2026-01-08 01:35:50 | INFO  | Live migrating server 61f611cc-0134-453e-bbfc-1d0713848f0b
2026-01-08 01:36:00.677311 | orchestrator | 2026-01-08 01:36:00 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:36:03.048849 | orchestrator | 2026-01-08 01:36:03 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:36:05.293139 | orchestrator | 2026-01-08 01:36:05 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:36:07.654528 | orchestrator | 2026-01-08 01:36:07 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:36:09.932064 | orchestrator | 2026-01-08 01:36:09 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:36:12.217209 | orchestrator | 2026-01-08 01:36:12 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:36:14.508376 | orchestrator | 2026-01-08 01:36:14 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:36:16.801760 | orchestrator | 2026-01-08 01:36:16 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:36:19.091729 | orchestrator | 2026-01-08 01:36:19 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) completed with status ACTIVE
2026-01-08 01:36:19.091795 | orchestrator | 2026-01-08 01:36:19 | INFO  | Live migrating server a01026d0-34d8-4455-b6d6-a5102184e753
2026-01-08 01:36:29.574352 | orchestrator | 2026-01-08 01:36:29 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) is still in progress
2026-01-08 01:36:31.876275 | orchestrator | 2026-01-08 01:36:31 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) is still in progress
2026-01-08 01:36:34.240709 | orchestrator | 2026-01-08 01:36:34 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) is still in progress
2026-01-08 01:36:36.702363 | orchestrator | 2026-01-08 01:36:36 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) is still in progress
2026-01-08 01:36:39.191401 | orchestrator | 2026-01-08 01:36:39 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) is still in progress
2026-01-08 01:36:41.480138 | orchestrator | 2026-01-08 01:36:41 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) is still in progress
2026-01-08 01:36:43.750965 | orchestrator | 2026-01-08 01:36:43 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) is still in progress
2026-01-08 01:36:46.010100 | orchestrator | 2026-01-08 01:36:46 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) is still in progress
2026-01-08 01:36:48.385148 | orchestrator | 2026-01-08 01:36:48 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) completed with status ACTIVE
2026-01-08 01:36:48.385216 | orchestrator | 2026-01-08 01:36:48 | INFO  | Live migrating server 7d17376b-3cb5-4f03-8009-32424d92adfc
2026-01-08 01:36:58.747933 | orchestrator | 2026-01-08 01:36:58 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:37:01.177885 | orchestrator | 2026-01-08 01:37:01 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:37:03.470940 | orchestrator | 2026-01-08 01:37:03 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:37:05.698714 | orchestrator | 2026-01-08 01:37:05 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:37:08.059316 | orchestrator | 2026-01-08 01:37:08 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:37:10.421770 | orchestrator | 2026-01-08 01:37:10 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:37:12.714746 | orchestrator | 2026-01-08 01:37:12 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:37:15.004108 | orchestrator | 2026-01-08 01:37:15 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:37:17.333248 | orchestrator | 2026-01-08 01:37:17 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:37:19.663494 | orchestrator | 2026-01-08 01:37:19 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) completed with status ACTIVE
2026-01-08 01:37:19.663575 | orchestrator | 2026-01-08 01:37:19 | INFO  | Live migrating server 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296
2026-01-08 01:37:29.402619 | orchestrator | 2026-01-08 01:37:29 | INFO  | Live migration of 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 (test) is still in progress
2026-01-08 01:37:31.980208 | orchestrator | 2026-01-08 01:37:31 | INFO  | Live migration of 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 (test) is still in progress
2026-01-08 01:37:34.366528 | orchestrator | 2026-01-08 01:37:34 | INFO  | Live migration of 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 (test) is still in progress
2026-01-08 01:37:36.751115 | orchestrator | 2026-01-08 01:37:36 | INFO  | Live migration of 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 (test) is still in progress
2026-01-08 01:37:39.067148 | orchestrator | 2026-01-08 01:37:39 | INFO  | Live migration of 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 (test) is still in progress
2026-01-08 01:37:41.400637 | orchestrator | 2026-01-08 01:37:41 | INFO  | Live migration of 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 (test) is still in progress
2026-01-08 01:37:43.711004 | orchestrator | 2026-01-08 01:37:43 | INFO  | Live migration of 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 (test) is still in progress
2026-01-08 01:37:46.123486 | orchestrator | 2026-01-08 01:37:46 | INFO  | Live migration of 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 (test) is still in progress
2026-01-08 01:37:48.350973 | orchestrator | 2026-01-08 01:37:48 | INFO  | Live migration of 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 (test) is still in progress
2026-01-08 01:37:50.640590 | orchestrator | 2026-01-08 01:37:50 | INFO  | Live migration of 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 (test) is still in progress
2026-01-08 01:37:52.956521 | orchestrator | 2026-01-08 01:37:52 | INFO  | Live migration of 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 (test) completed with status ACTIVE
2026-01-08 01:37:53.295036 | orchestrator | + compute_list
2026-01-08 01:37:53.295117 | orchestrator | + osism manage compute list testbed-node-3
2026-01-08 01:37:56.235408 | orchestrator | +------+--------+----------+
2026-01-08 01:37:56.235479 | orchestrator | | ID | Name | Status |
2026-01-08 01:37:56.235484 | orchestrator | |------+--------+----------|
2026-01-08 01:37:56.235489 | orchestrator | +------+--------+----------+
2026-01-08 01:37:56.575423 | orchestrator | + osism manage compute list testbed-node-4
2026-01-08 01:37:59.919119 | orchestrator | +--------------------------------------+--------+----------+
2026-01-08 01:37:59.919189 | orchestrator | | ID | Name | Status |
2026-01-08 01:37:59.919195 | orchestrator | |--------------------------------------+--------+----------|
2026-01-08 01:37:59.919200 | orchestrator | | 2361b99c-6d34-42a4-903b-11e08a07e452 | test-4 | ACTIVE |
2026-01-08 01:37:59.919204 | orchestrator | | 61f611cc-0134-453e-bbfc-1d0713848f0b | test-3 | ACTIVE |
2026-01-08 01:37:59.919208 | orchestrator | | a01026d0-34d8-4455-b6d6-a5102184e753 | test-2 | ACTIVE |
2026-01-08 01:37:59.919212 | orchestrator | | 7d17376b-3cb5-4f03-8009-32424d92adfc | test-1 | ACTIVE |
2026-01-08 01:37:59.919217 | orchestrator | | 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 | test | ACTIVE |
2026-01-08 01:37:59.919221 | orchestrator | +--------------------------------------+--------+----------+
2026-01-08 01:38:00.358853 | orchestrator | + osism manage compute list testbed-node-5
2026-01-08 01:38:03.165453 | orchestrator | +------+--------+----------+
2026-01-08 01:38:03.165540 | orchestrator | | ID | Name | Status |
2026-01-08 01:38:03.165550 | orchestrator | |------+--------+----------|
2026-01-08 01:38:03.165557 | orchestrator | +------+--------+----------+
2026-01-08 01:38:03.495046 | orchestrator | + server_ping
2026-01-08 01:38:03.496764 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-01-08 01:38:03.496820 | orchestrator | ++ tr -d '\r'
2026-01-08 01:38:06.301964 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-08 01:38:06.302055 | orchestrator | + ping -c3 192.168.112.158
2026-01-08 01:38:06.308991 | orchestrator | PING 192.168.112.158 (192.168.112.158) 56(84) bytes of data.
2026-01-08 01:38:06.309052 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=1 ttl=63 time=5.07 ms
2026-01-08 01:38:07.307989 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=2 ttl=63 time=2.30 ms
2026-01-08 01:38:08.310273 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=3 ttl=63 time=2.26 ms
2026-01-08 01:38:08.310369 | orchestrator |
2026-01-08 01:38:08.310383 | orchestrator | --- 192.168.112.158 ping statistics ---
2026-01-08 01:38:08.310390 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-08 01:38:08.310397 | orchestrator | rtt min/avg/max/mdev = 2.264/3.212/5.069/1.313 ms
2026-01-08 01:38:08.310405 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-08 01:38:08.310412 | orchestrator | + ping -c3 192.168.112.127
2026-01-08 01:38:08.323002 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data.
2026-01-08 01:38:08.323092 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=7.51 ms
2026-01-08 01:38:09.319434 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.09 ms
2026-01-08 01:38:10.320994 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=1.78 ms
2026-01-08 01:38:10.321062 | orchestrator |
2026-01-08 01:38:10.321078 | orchestrator | --- 192.168.112.127 ping statistics ---
2026-01-08 01:38:10.321111 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-08 01:38:10.321116 | orchestrator | rtt min/avg/max/mdev = 1.779/3.793/7.510/2.631 ms
2026-01-08 01:38:10.321303 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-08 01:38:10.321317 | orchestrator | + ping -c3 192.168.112.168
2026-01-08 01:38:10.333182 | orchestrator | PING 192.168.112.168 (192.168.112.168) 56(84) bytes of data.
2026-01-08 01:38:10.333266 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=1 ttl=63 time=5.68 ms
2026-01-08 01:38:11.331338 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=2 ttl=63 time=2.07 ms
2026-01-08 01:38:12.333039 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=3 ttl=63 time=1.86 ms
2026-01-08 01:38:12.333101 | orchestrator |
2026-01-08 01:38:12.333107 | orchestrator | --- 192.168.112.168 ping statistics ---
2026-01-08 01:38:12.333113 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-08 01:38:12.333119 | orchestrator | rtt min/avg/max/mdev = 1.864/3.204/5.676/1.749 ms
2026-01-08 01:38:12.333124 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-08 01:38:12.333129 | orchestrator | + ping -c3 192.168.112.196
2026-01-08 01:38:12.342360 | orchestrator | PING 192.168.112.196 (192.168.112.196) 56(84) bytes of data.
2026-01-08 01:38:12.342429 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=1 ttl=63 time=5.62 ms
2026-01-08 01:38:13.340246 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=2 ttl=63 time=1.88 ms
2026-01-08 01:38:14.341391 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=3 ttl=63 time=1.29 ms
2026-01-08 01:38:14.341950 | orchestrator |
2026-01-08 01:38:14.341972 | orchestrator | --- 192.168.112.196 ping statistics ---
2026-01-08 01:38:14.341978 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-08 01:38:14.341982 | orchestrator | rtt min/avg/max/mdev = 1.289/2.928/5.620/1.918 ms
2026-01-08 01:38:14.341994 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-08 01:38:14.341999 | orchestrator | + ping -c3 192.168.112.190
2026-01-08 01:38:14.352862 | orchestrator | PING 192.168.112.190 (192.168.112.190) 56(84) bytes of data.
2026-01-08 01:38:14.352931 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=1 ttl=63 time=4.75 ms
2026-01-08 01:38:15.351653 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=2 ttl=63 time=1.46 ms
2026-01-08 01:38:16.353985 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=3 ttl=63 time=1.67 ms
2026-01-08 01:38:16.354079 | orchestrator |
2026-01-08 01:38:16.354089 | orchestrator | --- 192.168.112.190 ping statistics ---
2026-01-08 01:38:16.354108 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-01-08 01:38:16.354115 | orchestrator | rtt min/avg/max/mdev = 1.455/2.622/4.747/1.504 ms
2026-01-08 01:38:16.354902 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2026-01-08 01:38:19.501569 | orchestrator | 2026-01-08 01:38:19 | INFO  | Live migrating server 2361b99c-6d34-42a4-903b-11e08a07e452
2026-01-08 01:38:29.244639 | orchestrator | 2026-01-08 01:38:29 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) is still in progress
2026-01-08 01:38:31.631168 | orchestrator | 2026-01-08 01:38:31 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) is still in progress
2026-01-08 01:38:33.991377 | orchestrator | 2026-01-08 01:38:33 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) is still in progress
2026-01-08 01:38:36.437715 | orchestrator | 2026-01-08 01:38:36 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) is still in progress
2026-01-08 01:38:38.748800 | orchestrator | 2026-01-08 01:38:38 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) is still in progress
2026-01-08 01:38:41.138530 | orchestrator | 2026-01-08 01:38:41 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) is still in progress
2026-01-08 01:38:43.555524 | orchestrator | 2026-01-08 01:38:43 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) is still in progress
2026-01-08 01:38:45.854902 | orchestrator | 2026-01-08 01:38:45 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) is still in progress
2026-01-08 01:38:48.195036 | orchestrator | 2026-01-08 01:38:48 | INFO  | Live migration of 2361b99c-6d34-42a4-903b-11e08a07e452 (test-4) completed with status ACTIVE
2026-01-08 01:38:48.195150 | orchestrator | 2026-01-08 01:38:48 | INFO  | Live migrating server 61f611cc-0134-453e-bbfc-1d0713848f0b
2026-01-08 01:38:58.973208 | orchestrator | 2026-01-08 01:38:58 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:39:01.376296 | orchestrator | 2026-01-08 01:39:01 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:39:03.644309 | orchestrator | 2026-01-08 01:39:03 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:39:06.030964 | orchestrator | 2026-01-08 01:39:06 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:39:08.395235 | orchestrator | 2026-01-08 01:39:08 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:39:10.667126 | orchestrator | 2026-01-08 01:39:10 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:39:13.038724 | orchestrator | 2026-01-08 01:39:13 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:39:15.321547 | orchestrator | 2026-01-08 01:39:15 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) is still in progress
2026-01-08 01:39:17.648441 | orchestrator | 2026-01-08 01:39:17 | INFO  | Live migration of 61f611cc-0134-453e-bbfc-1d0713848f0b (test-3) completed with status ACTIVE
2026-01-08 01:39:17.648519 | orchestrator | 2026-01-08 01:39:17 | INFO  | Live migrating server a01026d0-34d8-4455-b6d6-a5102184e753
2026-01-08 01:39:29.356769 | orchestrator | 2026-01-08 01:39:29 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) is still in progress
2026-01-08 01:39:31.702399 | orchestrator | 2026-01-08 01:39:31 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) is still in progress
2026-01-08 01:39:34.085226 | orchestrator | 2026-01-08 01:39:34 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) is still in progress
2026-01-08 01:39:36.364976 | orchestrator | 2026-01-08 01:39:36 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) is still in progress
2026-01-08 01:39:38.749722 | orchestrator | 2026-01-08 01:39:38 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) is still in progress
2026-01-08 01:39:41.038848 | orchestrator | 2026-01-08 01:39:41 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) is still in progress
2026-01-08 01:39:43.414382 | orchestrator | 2026-01-08 01:39:43 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) is still in progress
2026-01-08 01:39:45.745129 | orchestrator | 2026-01-08 01:39:45 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) is still in progress
2026-01-08 01:39:48.074837 | orchestrator | 2026-01-08 01:39:48 | INFO  | Live migration of a01026d0-34d8-4455-b6d6-a5102184e753 (test-2) completed with status ACTIVE
2026-01-08 01:39:48.074928 | orchestrator | 2026-01-08 01:39:48 | INFO  | Live migrating server 7d17376b-3cb5-4f03-8009-32424d92adfc
2026-01-08 01:39:59.529179 | orchestrator | 2026-01-08 01:39:59 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:40:01.923061 | orchestrator | 2026-01-08 01:40:01 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:40:04.313088 | orchestrator | 2026-01-08 01:40:04 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:40:06.726758 | orchestrator | 2026-01-08 01:40:06 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:40:09.034448 | orchestrator | 2026-01-08 01:40:09 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:40:11.396548 | orchestrator | 2026-01-08 01:40:11 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:40:13.693978 | orchestrator | 2026-01-08 01:40:13 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:40:16.086426 | orchestrator | 2026-01-08 01:40:16 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) is still in progress
2026-01-08 01:40:18.407599 | orchestrator | 2026-01-08 01:40:18 | INFO  | Live migration of 7d17376b-3cb5-4f03-8009-32424d92adfc (test-1) completed with status ACTIVE
2026-01-08 01:40:18.407662 | orchestrator | 2026-01-08 01:40:18 | INFO  | Live migrating server 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296
2026-01-08 01:40:28.667354 | orchestrator | 2026-01-08 01:40:28 | INFO  | Live migration of 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 (test) is still in progress
2026-01-08 01:40:31.019817 | orchestrator | 2026-01-08 01:40:31 | INFO  | Live migration of 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 (test) is still in progress
2026-01-08 01:40:33.376943 | orchestrator | 2026-01-08 01:40:33 | INFO  | Live migration of 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 (test) is still in progress
2026-01-08 01:40:35.662450 | orchestrator | 2026-01-08 01:40:35 | INFO  | Live migration of 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 (test) is still in progress
2026-01-08 01:40:38.035706 | orchestrator | 2026-01-08 01:40:38 | INFO  | Live migration of 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 (test) is still in progress
2026-01-08 01:40:40.385733 | orchestrator | 2026-01-08 01:40:40 | INFO  | Live migration of 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 (test) is still in progress
2026-01-08 01:40:42.671923 | orchestrator | 2026-01-08 01:40:42 | INFO  | Live migration of 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 (test) is still in progress
2026-01-08 01:40:44.917243 | orchestrator | 2026-01-08 01:40:44 | INFO  | Live migration of 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 (test) is still in progress
2026-01-08 01:40:47.282231 | orchestrator | 2026-01-08 01:40:47 | INFO  | Live migration of 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 (test) is still in progress
2026-01-08 01:40:49.596613 | orchestrator | 2026-01-08 01:40:49 | INFO  | Live migration of 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 (test) completed with status ACTIVE
2026-01-08 01:40:49.935312 | orchestrator | + compute_list
2026-01-08 01:40:49.935384 | orchestrator | + osism manage compute list testbed-node-3
2026-01-08 01:40:52.823422 | orchestrator | +------+--------+----------+
2026-01-08 01:40:52.823492 | orchestrator | | ID | Name | Status |
2026-01-08 01:40:52.823501 | orchestrator | |------+--------+----------|
2026-01-08 01:40:52.823507 | orchestrator | +------+--------+----------+
2026-01-08 01:40:53.181095 | orchestrator | + osism manage compute list testbed-node-4
2026-01-08 01:40:56.067868 | orchestrator | +------+--------+----------+
2026-01-08 01:40:56.067937 | orchestrator | | ID | Name | Status |
2026-01-08 01:40:56.067943 | orchestrator | |------+--------+----------|
2026-01-08 01:40:56.067948 | orchestrator | +------+--------+----------+
2026-01-08 01:40:56.410614 | orchestrator | + osism manage compute list testbed-node-5
2026-01-08 01:40:59.646219 | orchestrator | +--------------------------------------+--------+----------+
2026-01-08 01:40:59.646321 | orchestrator | | ID | Name | Status |
2026-01-08 01:40:59.646330 | orchestrator | |--------------------------------------+--------+----------|
2026-01-08 01:40:59.646337 | orchestrator | | 2361b99c-6d34-42a4-903b-11e08a07e452 | test-4 | ACTIVE |
2026-01-08 01:40:59.646344 | orchestrator | | 61f611cc-0134-453e-bbfc-1d0713848f0b | test-3 | ACTIVE |
2026-01-08 01:40:59.646350 | orchestrator | | a01026d0-34d8-4455-b6d6-a5102184e753 | test-2 | ACTIVE |
2026-01-08 01:40:59.646357 | orchestrator | | 7d17376b-3cb5-4f03-8009-32424d92adfc | test-1 | ACTIVE |
2026-01-08 01:40:59.646363 | orchestrator | | 9a60d663-0a4f-4308-b4a0-0ca3bf5c9296 | test | ACTIVE |
2026-01-08 01:40:59.646370 | orchestrator | +--------------------------------------+--------+----------+
2026-01-08 01:40:59.978333 | orchestrator | + server_ping
2026-01-08 01:40:59.978954 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-01-08 01:40:59.978972 | orchestrator | ++ tr -d '\r'
2026-01-08 01:41:02.805075 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-08 01:41:02.805177 | orchestrator | + ping -c3 192.168.112.158
2026-01-08 01:41:02.816237 | orchestrator | PING 192.168.112.158 (192.168.112.158) 56(84) bytes of data.
2026-01-08 01:41:02.816302 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=1 ttl=63 time=8.55 ms
2026-01-08 01:41:03.812354 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=2 ttl=63 time=2.42 ms
2026-01-08 01:41:04.814325 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=3 ttl=63 time=2.38 ms
2026-01-08 01:41:04.814407 | orchestrator |
2026-01-08 01:41:04.814416 | orchestrator | --- 192.168.112.158 ping statistics ---
2026-01-08 01:41:04.814424 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-01-08 01:41:04.814430 | orchestrator | rtt min/avg/max/mdev = 2.376/4.448/8.550/2.900 ms
2026-01-08 01:41:04.815192 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-08 01:41:04.815230 | orchestrator | + ping -c3 192.168.112.127
2026-01-08 01:41:04.827553 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data.
2026-01-08 01:41:04.827614 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=7.39 ms
2026-01-08 01:41:05.823988 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.20 ms
2026-01-08 01:41:06.826220 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=2.35 ms
2026-01-08 01:41:06.827168 | orchestrator |
2026-01-08 01:41:06.827214 | orchestrator | --- 192.168.112.127 ping statistics ---
2026-01-08 01:41:06.827221 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-08 01:41:06.827227 | orchestrator | rtt min/avg/max/mdev = 2.201/3.981/7.389/2.410 ms
2026-01-08 01:41:06.827243 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-08 01:41:06.827248 | orchestrator | + ping -c3 192.168.112.168
2026-01-08 01:41:06.836404 | orchestrator | PING 192.168.112.168 (192.168.112.168) 56(84) bytes of data.
2026-01-08 01:41:06.836470 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=1 ttl=63 time=6.33 ms
2026-01-08 01:41:07.833799 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=2 ttl=63 time=2.24 ms
2026-01-08 01:41:08.836026 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=3 ttl=63 time=2.34 ms
2026-01-08 01:41:08.836106 | orchestrator |
2026-01-08 01:41:08.836117 | orchestrator | --- 192.168.112.168 ping statistics ---
2026-01-08 01:41:08.836126 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-08 01:41:08.836134 | orchestrator | rtt min/avg/max/mdev = 2.242/3.637/6.333/1.906 ms
2026-01-08 01:41:08.836143 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-08 01:41:08.836152 | orchestrator | + ping -c3 192.168.112.196
2026-01-08 01:41:08.848461 | orchestrator | PING 192.168.112.196 (192.168.112.196) 56(84) bytes of data.
2026-01-08 01:41:08.848538 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=1 ttl=63 time=7.24 ms
2026-01-08 01:41:09.844583 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=2 ttl=63 time=1.84 ms
2026-01-08 01:41:10.846318 | orchestrator | 64 bytes from 192.168.112.196: icmp_seq=3 ttl=63 time=2.21 ms
2026-01-08 01:41:10.846374 | orchestrator |
2026-01-08 01:41:10.846380 | orchestrator | --- 192.168.112.196 ping statistics ---
2026-01-08 01:41:10.846406 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-08 01:41:10.846411 | orchestrator | rtt min/avg/max/mdev = 1.837/3.761/7.241/2.465 ms
2026-01-08 01:41:10.846703 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-08 01:41:10.846741 | orchestrator | + ping -c3 192.168.112.190
2026-01-08 01:41:10.857892 | orchestrator | PING 192.168.112.190 (192.168.112.190) 56(84) bytes of data.
2026-01-08 01:41:10.857975 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=1 ttl=63 time=6.46 ms
2026-01-08 01:41:11.855415 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=2 ttl=63 time=2.21 ms
2026-01-08 01:41:12.857824 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=3 ttl=63 time=2.27 ms
2026-01-08 01:41:12.857911 | orchestrator |
2026-01-08 01:41:12.857922 | orchestrator | --- 192.168.112.190 ping statistics ---
2026-01-08 01:41:12.857932 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-08 01:41:12.857940 | orchestrator | rtt min/avg/max/mdev = 2.205/3.646/6.460/1.989 ms
2026-01-08 01:41:13.073108 | orchestrator | ok: Runtime: 0:21:12.277395
2026-01-08 01:41:13.133381 |
2026-01-08 01:41:13.133542 | TASK [Run tempest]
2026-01-08 01:41:13.902380 | orchestrator |
2026-01-08 01:41:13.902525 | orchestrator | # Tempest
2026-01-08 01:41:13.902538 | orchestrator |
2026-01-08 01:41:13.902547 | orchestrator | + set -e
2026-01-08 01:41:13.902573 | orchestrator | + echo
2026-01-08 01:41:13.902582 | orchestrator | + echo '# Tempest'
2026-01-08 01:41:13.902593 | orchestrator | + echo
2026-01-08 01:41:13.902618 | orchestrator | + osism apply tempest --skip-tags run-tempest
2026-01-08 01:41:26.133853 | orchestrator | 2026-01-08 01:41:26 | INFO  | Task d14f8256-7a40-4d2c-a5bc-2046cb28c919 (tempest) was prepared for execution.
2026-01-08 01:41:26.133942 | orchestrator | 2026-01-08 01:41:26 | INFO  | It takes a moment until task d14f8256-7a40-4d2c-a5bc-2046cb28c919 (tempest) has been started and output is visible here.
2026-01-08 01:42:44.082511 | orchestrator | 2026-01-08 01:42:44.082618 | orchestrator | PLAY [Run tempest] ************************************************************* 2026-01-08 01:42:44.082628 | orchestrator | 2026-01-08 01:42:44.082633 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] ********************** 2026-01-08 01:42:44.082646 | orchestrator | Thursday 08 January 2026 01:41:30 +0000 (0:00:00.250) 0:00:00.250 ****** 2026-01-08 01:42:44.082650 | orchestrator | changed: [testbed-manager] 2026-01-08 01:42:44.082655 | orchestrator | 2026-01-08 01:42:44.082659 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] ***************** 2026-01-08 01:42:44.082664 | orchestrator | Thursday 08 January 2026 01:41:31 +0000 (0:00:00.725) 0:00:00.976 ****** 2026-01-08 01:42:44.082668 | orchestrator | changed: [testbed-manager] 2026-01-08 01:42:44.082672 | orchestrator | 2026-01-08 01:42:44.082684 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] *** 2026-01-08 01:42:44.082688 | orchestrator | Thursday 08 January 2026 01:41:32 +0000 (0:00:01.307) 0:00:02.284 ****** 2026-01-08 01:42:44.082692 | orchestrator | ok: [testbed-manager] 2026-01-08 01:42:44.082697 | orchestrator | 2026-01-08 01:42:44.082701 | orchestrator | TASK [osism.validations.tempest : Init tempest] ******************************** 2026-01-08 01:42:44.082705 | orchestrator | Thursday 08 January 2026 01:41:33 +0000 (0:00:00.445) 0:00:02.730 ****** 2026-01-08 01:42:44.082709 | orchestrator | changed: [testbed-manager] 2026-01-08 01:42:44.082712 | orchestrator | 2026-01-08 01:42:44.082716 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] *************************** 2026-01-08 01:42:44.082720 | orchestrator | Thursday 08 January 2026 01:41:54 +0000 (0:00:21.545) 0:00:24.275 ****** 2026-01-08 01:42:44.082725 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3) 2026-01-08 
01:42:44.082730 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2) 2026-01-08 01:42:44.082734 | orchestrator | 2026-01-08 01:42:44.082737 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************ 2026-01-08 01:42:44.082741 | orchestrator | Thursday 08 January 2026 01:42:02 +0000 (0:00:07.933) 0:00:32.208 ****** 2026-01-08 01:42:44.082745 | orchestrator | ok: [testbed-manager] => { 2026-01-08 01:42:44.082749 | orchestrator |  "changed": false, 2026-01-08 01:42:44.082753 | orchestrator |  "msg": "All assertions passed" 2026-01-08 01:42:44.082757 | orchestrator | } 2026-01-08 01:42:44.082761 | orchestrator | 2026-01-08 01:42:44.082765 | orchestrator | TASK [osism.validations.tempest : Get auth token] ****************************** 2026-01-08 01:42:44.082769 | orchestrator | Thursday 08 January 2026 01:42:02 +0000 (0:00:00.167) 0:00:32.376 ****** 2026-01-08 01:42:44.082773 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-08 01:42:44.082777 | orchestrator | 2026-01-08 01:42:44.082781 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] ************************ 2026-01-08 01:42:44.082784 | orchestrator | Thursday 08 January 2026 01:42:06 +0000 (0:00:03.691) 0:00:36.067 ****** 2026-01-08 01:42:44.082788 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-08 01:42:44.082792 | orchestrator | 2026-01-08 01:42:44.082796 | orchestrator | TASK [osism.validations.tempest : Get service catalog] ************************* 2026-01-08 01:42:44.082800 | orchestrator | Thursday 08 January 2026 01:42:08 +0000 (0:00:01.803) 0:00:37.870 ****** 2026-01-08 01:42:44.082804 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-08 01:42:44.082808 | orchestrator | 2026-01-08 01:42:44.082812 | orchestrator | TASK [osism.validations.tempest : Register img_file name] ********************** 2026-01-08 01:42:44.082833 | orchestrator | Thursday 08 January 2026 01:42:11 +0000 (0:00:03.586) 
0:00:41.457 ****** 2026-01-08 01:42:44.082837 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-08 01:42:44.082841 | orchestrator | 2026-01-08 01:42:44.082845 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************ 2026-01-08 01:42:44.082849 | orchestrator | Thursday 08 January 2026 01:42:12 +0000 (0:00:00.224) 0:00:41.681 ****** 2026-01-08 01:42:44.082853 | orchestrator | changed: [testbed-manager] 2026-01-08 01:42:44.082857 | orchestrator | 2026-01-08 01:42:44.082861 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ****************** 2026-01-08 01:42:44.082864 | orchestrator | Thursday 08 January 2026 01:42:14 +0000 (0:00:02.392) 0:00:44.074 ****** 2026-01-08 01:42:44.082868 | orchestrator | changed: [testbed-manager] 2026-01-08 01:42:44.082872 | orchestrator | 2026-01-08 01:42:44.082876 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************ 2026-01-08 01:42:44.082880 | orchestrator | Thursday 08 January 2026 01:42:24 +0000 (0:00:09.929) 0:00:54.004 ****** 2026-01-08 01:42:44.082884 | orchestrator | changed: [testbed-manager] 2026-01-08 01:42:44.082887 | orchestrator | 2026-01-08 01:42:44.082891 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ****************** 2026-01-08 01:42:44.082895 | orchestrator | Thursday 08 January 2026 01:42:25 +0000 (0:00:00.779) 0:00:54.783 ****** 2026-01-08 01:42:44.082899 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-08 01:42:44.082903 | orchestrator | 2026-01-08 01:42:44.082906 | orchestrator | TASK [osism.validations.tempest : Revoke token] ******************************** 2026-01-08 01:42:44.082910 | orchestrator | Thursday 08 January 2026 01:42:26 +0000 (0:00:01.542) 0:00:56.326 ****** 2026-01-08 01:42:44.082914 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-08 01:42:44.082918 | orchestrator | 2026-01-08 01:42:44.082922 | orchestrator | TASK 
[osism.validations.tempest : Set fact for config option api_extensions] *** 2026-01-08 01:42:44.082925 | orchestrator | Thursday 08 January 2026 01:42:28 +0000 (0:00:01.581) 0:00:57.908 ****** 2026-01-08 01:42:44.082929 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-08 01:42:44.082959 | orchestrator | 2026-01-08 01:42:44.082964 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] ********* 2026-01-08 01:42:44.082968 | orchestrator | Thursday 08 January 2026 01:42:28 +0000 (0:00:00.194) 0:00:58.103 ****** 2026-01-08 01:42:44.082972 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-08 01:42:44.082976 | orchestrator | 2026-01-08 01:42:44.082979 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] ***************** 2026-01-08 01:42:44.082983 | orchestrator | Thursday 08 January 2026 01:42:28 +0000 (0:00:00.189) 0:00:58.292 ****** 2026-01-08 01:42:44.082987 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-08 01:42:44.082991 | orchestrator | 2026-01-08 01:42:44.082995 | orchestrator | TASK [osism.validations.tempest : Assert floating network id has been resolved] *** 2026-01-08 01:42:44.083011 | orchestrator | Thursday 08 January 2026 01:42:32 +0000 (0:00:03.863) 0:01:02.155 ****** 2026-01-08 01:42:44.083016 | orchestrator | ok: [testbed-manager -> localhost] => { 2026-01-08 01:42:44.083020 | orchestrator |  "changed": false, 2026-01-08 01:42:44.083024 | orchestrator |  "msg": "All assertions passed" 2026-01-08 01:42:44.083028 | orchestrator | } 2026-01-08 01:42:44.083031 | orchestrator | 2026-01-08 01:42:44.083035 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] ************************** 2026-01-08 01:42:44.083039 | orchestrator | Thursday 08 January 2026 01:42:32 +0000 (0:00:00.209) 0:01:02.365 ****** 2026-01-08 01:42:44.083043 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})  2026-01-08 
01:42:44.083051 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})  2026-01-08 01:42:44.083055 | orchestrator | skipping: [testbed-manager] 2026-01-08 01:42:44.083066 | orchestrator | 2026-01-08 01:42:44.083070 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] *********** 2026-01-08 01:42:44.083074 | orchestrator | Thursday 08 January 2026 01:42:33 +0000 (0:00:00.442) 0:01:02.807 ****** 2026-01-08 01:42:44.083083 | orchestrator | skipping: [testbed-manager] 2026-01-08 01:42:44.083087 | orchestrator | 2026-01-08 01:42:44.083091 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] ******************* 2026-01-08 01:42:44.083094 | orchestrator | Thursday 08 January 2026 01:42:33 +0000 (0:00:00.160) 0:01:02.968 ****** 2026-01-08 01:42:44.083098 | orchestrator | ok: [testbed-manager] 2026-01-08 01:42:44.083102 | orchestrator | 2026-01-08 01:42:44.083106 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] *************************** 2026-01-08 01:42:44.083110 | orchestrator | Thursday 08 January 2026 01:42:33 +0000 (0:00:00.496) 0:01:03.464 ****** 2026-01-08 01:42:44.083114 | orchestrator | changed: [testbed-manager] 2026-01-08 01:42:44.083117 | orchestrator | 2026-01-08 01:42:44.083121 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] ******************* 2026-01-08 01:42:44.083125 | orchestrator | Thursday 08 January 2026 01:42:34 +0000 (0:00:00.969) 0:01:04.434 ****** 2026-01-08 01:42:44.083129 | orchestrator | ok: [testbed-manager] 2026-01-08 01:42:44.083139 | orchestrator | 2026-01-08 01:42:44.083145 | orchestrator | TASK [osism.validations.tempest : Copy include list] *************************** 2026-01-08 01:42:44.083150 | orchestrator | Thursday 08 January 2026 01:42:35 +0000 (0:00:00.426) 0:01:04.861 ****** 2026-01-08 01:42:44.083156 | orchestrator | skipping: [testbed-manager] 2026-01-08 
01:42:44.083164 | orchestrator | 2026-01-08 01:42:44.083172 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] ********************** 2026-01-08 01:42:44.083179 | orchestrator | Thursday 08 January 2026 01:42:35 +0000 (0:00:00.161) 0:01:05.022 ****** 2026-01-08 01:42:44.083185 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1}) 2026-01-08 01:42:44.083191 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2}) 2026-01-08 01:42:44.083197 | orchestrator | 2026-01-08 01:42:44.083203 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] ********************** 2026-01-08 01:42:44.083209 | orchestrator | Thursday 08 January 2026 01:42:42 +0000 (0:00:07.622) 0:01:12.645 ****** 2026-01-08 01:42:44.083215 | orchestrator | changed: [testbed-manager] 2026-01-08 01:42:44.083220 | orchestrator | 2026-01-08 01:42:44.083226 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-08 01:42:44.083233 | orchestrator | testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-08 01:42:44.083240 | orchestrator | 2026-01-08 01:42:44.083246 | orchestrator | 2026-01-08 01:42:44.083252 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-08 01:42:44.083257 | orchestrator | Thursday 08 January 2026 01:42:44 +0000 (0:00:01.053) 0:01:13.698 ****** 2026-01-08 01:42:44.083264 | orchestrator | =============================================================================== 2026-01-08 01:42:44.083270 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 21.55s 2026-01-08 01:42:44.083276 | orchestrator | osism.validations.tempest : Install qemu-utils package ------------------ 9.93s 2026-01-08 01:42:44.083284 | orchestrator | 
osism.validations.tempest : Resolve image IDs --------------------------- 7.93s 2026-01-08 01:42:44.083288 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 7.62s 2026-01-08 01:42:44.083291 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 3.86s 2026-01-08 01:42:44.083295 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.69s 2026-01-08 01:42:44.083299 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 3.59s 2026-01-08 01:42:44.083303 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.39s 2026-01-08 01:42:44.083307 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 1.80s 2026-01-08 01:42:44.083311 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.58s 2026-01-08 01:42:44.083315 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.54s 2026-01-08 01:42:44.083323 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.31s 2026-01-08 01:42:44.083327 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 1.05s 2026-01-08 01:42:44.083331 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.97s 2026-01-08 01:42:44.083339 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.78s 2026-01-08 01:42:44.083343 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 0.73s 2026-01-08 01:42:44.083347 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.50s 2026-01-08 01:42:44.083355 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.45s 2026-01-08 01:42:44.481994 | orchestrator | 
osism.validations.tempest : Resolve flavor IDs -------------------------- 0.44s 2026-01-08 01:42:44.482094 | orchestrator | osism.validations.tempest : Get stats of include list ------------------- 0.43s 2026-01-08 01:42:44.891876 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf 2026-01-08 01:42:44.896064 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf 2026-01-08 01:42:44.899453 | orchestrator | 2026-01-08 01:42:44.899519 | orchestrator | ## IDENTITY (API) 2026-01-08 01:42:44.899531 | orchestrator | 2026-01-08 01:42:44.899536 | orchestrator | + echo 2026-01-08 01:42:44.899541 | orchestrator | + echo '## IDENTITY (API)' 2026-01-08 01:42:44.899545 | orchestrator | + echo 2026-01-08 01:42:44.899550 | orchestrator | + _tempest tempest.api.identity.v3 2026-01-08 01:42:44.899555 | orchestrator | + local regex=tempest.api.identity.v3 2026-01-08 01:42:44.900693 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16 2026-01-08 01:42:44.901215 | orchestrator | ++ date +%Y%m%d-%H%M 2026-01-08 01:42:44.902544 | orchestrator | + tee -a /opt/tempest/20260108-0142.log 2026-01-08 01:42:49.143598 | orchestrator | 2026-01-08 01:42:49.143 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf 2026-01-08 01:42:49.238813 | orchestrator | 2026-01-08 01:42:49.238 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests 2026-01-08 01:42:49.238912 | orchestrator | 2026-01-08 01:42:49.239 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests 2026-01-08 01:42:49.238923 | orchestrator | 2026-01-08 01:42:49.239 1 INFO 
tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests 2026-01-08 01:42:49.238930 | orchestrator | 2026-01-08 01:42:49.239 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin 2026-01-08 01:42:49.239066 | orchestrator | 2026-01-08 01:42:49.240 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests 2026-01-08 01:42:49.239079 | orchestrator | 2026-01-08 01:42:49.240 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate 2026-01-08 01:42:49.239519 | orchestrator | 2026-01-08 01:42:49.240 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests 2026-01-08 01:42:49.239562 | orchestrator | 2026-01-08 01:42:49.240 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests 2026-01-08 01:42:49.239630 | orchestrator | 2026-01-08 01:42:49.240 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests 2026-01-08 01:42:49.240627 | orchestrator | 2026-01-08 01:42:49.241 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests 2026-01-08 01:42:49.240700 | orchestrator | 2026-01-08 01:42:49.241 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests 2026-01-08 01:42:49.241172 | orchestrator | 2026-01-08 01:42:49.242 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests 2026-01-08 01:42:49.241204 | orchestrator | 2026-01-08 01:42:49.242 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests 2026-01-08 01:42:49.241208 | orchestrator | 2026-01-08 01:42:49.242 1 INFO 
tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests 2026-01-08 01:42:49.241213 | orchestrator | 2026-01-08 01:42:49.242 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin 2026-01-08 01:42:49.241372 | orchestrator | 2026-01-08 01:42:49.242 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests 2026-01-08 01:42:49.241385 | orchestrator | 2026-01-08 01:42:49.242 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate 2026-01-08 01:42:49.241549 | orchestrator | 2026-01-08 01:42:49.242 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests 2026-01-08 01:42:49.241572 | orchestrator | 2026-01-08 01:42:49.242 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests 2026-01-08 01:42:49.241577 | orchestrator | 2026-01-08 01:42:49.242 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests 2026-01-08 01:42:49.241581 | orchestrator | 2026-01-08 01:42:49.242 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests 2026-01-08 01:42:49.241585 | orchestrator | 2026-01-08 01:42:49.242 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests 2026-01-08 01:43:03.131648 | orchestrator | 2026-01-08 01:43:03.131745 | orchestrator | ========================= 2026-01-08 01:43:03.131759 | orchestrator | Failures during discovery 2026-01-08 01:43:03.131769 | orchestrator | ========================= 2026-01-08 01:43:03.131779 | orchestrator | --- stdout --- 2026-01-08 01:43:03.131789 | orchestrator | 2026-01-08 01:42:52.806 11 INFO tempest [-] Using tempest 
config file /tempest/etc/tempest.conf 2026-01-08 01:43:03.131800 | orchestrator | 2026-01-08 01:42:52.808 11 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests 2026-01-08 01:43:03.131811 | orchestrator | 2026-01-08 01:42:52.808 11 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests 2026-01-08 01:43:03.131820 | orchestrator | 2026-01-08 01:42:52.808 11 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests 2026-01-08 01:43:03.131829 | orchestrator | 2026-01-08 01:42:52.808 11 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin 2026-01-08 01:43:03.131838 | orchestrator | 2026-01-08 01:42:52.809 11 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests 2026-01-08 01:43:03.131848 | orchestrator | 2026-01-08 01:42:52.809 11 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate 2026-01-08 01:43:03.131859 | orchestrator | 2026-01-08 01:42:52.809 11 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests 2026-01-08 01:43:03.131868 | orchestrator | 2026-01-08 01:42:52.809 11 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests 2026-01-08 01:43:03.131877 | orchestrator | 2026-01-08 01:42:52.809 11 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests 2026-01-08 01:43:03.131886 | orchestrator | 2026-01-08 01:42:52.810 11 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests 2026-01-08 01:43:03.131895 | orchestrator | 2026-01-08 01:42:52.810 11 INFO tempest.test_discover.plugins [-] Register additional config options from 
Tempest plugin: ironic_tests
2026-01-08 01:43:03.131904 | orchestrator | 2026-01-08 01:42:52.810 11 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-08 01:43:03.131940 | orchestrator | 2026-01-08 01:42:52.810 11 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-08 01:43:03.131949 | orchestrator | 2026-01-08 01:42:52.811 11 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-08 01:43:03.131958 | orchestrator | 2026-01-08 01:42:52.811 11 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-08 01:43:03.131968 | orchestrator | 2026-01-08 01:42:52.811 11 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-08 01:43:03.131977 | orchestrator | 2026-01-08 01:42:52.811 11 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-08 01:43:03.132034 | orchestrator | 2026-01-08 01:42:52.811 11 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-08 01:43:03.132045 | orchestrator | 2026-01-08 01:42:52.811 11 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-08 01:43:03.132067 | orchestrator | 2026-01-08 01:42:52.811 11 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-08 01:43:03.132076 | orchestrator | 2026-01-08 01:42:52.811 11 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-08 01:43:03.132085 | orchestrator | 2026-01-08 01:42:52.811 11 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests
2026-01-08 01:43:03.132103 | orchestrator | 2026-01-08 01:42:52.813 11 WARNING oslo_config.cfg [-] Deprecated: Option "auth_version" from group "identity" is deprecated for removal (Identity v2 API was removed and v3 is the only available identity API version now). Its value may be silently ignored in the future.
2026-01-08 01:43:03.132127 | orchestrator | 2026-01-08 01:42:53.653 11 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: telemetry_tests
2026-01-08 01:43:03.132142 | orchestrator | 2026-01-08 01:42:53.653 11 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: barbican_tests
2026-01-08 01:43:03.132156 | orchestrator | 2026-01-08 01:42:53.653 11 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: glance_tests
2026-01-08 01:43:03.132171 | orchestrator | 2026-01-08 01:42:53.653 11 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: octavia-tempest-plugin
2026-01-08 01:43:03.132205 | orchestrator | 2026-01-08 01:42:53.653 11 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: magnum_tests
2026-01-08 01:43:03.132220 | orchestrator | 2026-01-08 01:42:53.653 11 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: designate
2026-01-08 01:43:03.132234 | orchestrator | 2026-01-08 01:42:53.653 11 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: neutron_tests
2026-01-08 01:43:03.132248 | orchestrator | 2026-01-08 01:42:53.653 11 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: cinder_tests
2026-01-08 01:43:03.132263 | orchestrator | 2026-01-08 01:42:53.653 11 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: manila_tests
2026-01-08 01:43:03.132278 | orchestrator | 2026-01-08 01:42:53.653 11 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: keystone_tests
2026-01-08 01:43:03.132294 | orchestrator | 2026-01-08 01:42:53.653 11 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: ironic_tests
2026-01-08 01:43:03.132309 | orchestrator | --- import errors ---
2026-01-08 01:43:03.132325 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration
2026-01-08 01:43:03.132341 | orchestrator | Traceback (most recent call last):
2026-01-08 01:43:03.132357 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path
2026-01-08 01:43:03.132372 | orchestrator |     module = self._get_module_from_name(name)
2026-01-08 01:43:03.132387 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name
2026-01-08 01:43:03.132414 | orchestrator |     __import__(name)
2026-01-08 01:43:03.132428 | orchestrator |     ~~~~~~~~~~^^^^^^
2026-01-08 01:43:03.132444 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in <module>
2026-01-08 01:43:03.132459 | orchestrator |     dns_base = testtools.try_import('designate_tempest_plugin.tests.base')
2026-01-08 01:43:03.132474 | orchestrator |                ^^^^^^^^^^^^^^^^^^^^
2026-01-08 01:43:03.132491 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import'
2026-01-08 01:43:03.132506 | orchestrator |
2026-01-08 01:43:03.132521 | orchestrator | ================================================================================
2026-01-08 01:43:03.132536 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path.
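[Editor's note] The discovery failure above is not specific to one suite: `neutron_tempest_plugin` calls `testtools.try_import`, a helper that older testtools releases exposed at the package top level but that recent releases no longer provide, hence the `AttributeError` at import time. As a hedged illustration only (not the OSISM fix, and not part of this job), a minimal stand-in with the module-or-`None` behaviour the plugin relies on could look like this:

```python
import importlib

def try_import(name, alternative=None):
    """Best-effort import, mirroring the helper older testtools exposed as
    ``testtools.try_import``: return the imported module (or attribute) if
    it exists, otherwise return ``alternative`` instead of raising."""
    module_name, _, attr = name.rpartition(".")
    try:
        return importlib.import_module(name)
    except ImportError:
        pass
    # ``name`` may also point at an attribute inside a module rather than
    # a module itself, e.g. "package.module.ClassName".
    if module_name:
        try:
            module = importlib.import_module(module_name)
        except ImportError:
            return alternative
        return getattr(module, attr, alternative)
    return alternative

# The plugin's pattern: resolve to None when the optional designate
# plugin is absent, instead of failing the whole test discovery.
dns_base = try_import("designate_tempest_plugin.tests.base")
```

With such a guard in place, discovery would skip the DNS integration scenarios when the designate plugin is missing rather than abort every tempest invocation in the job.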
2026-01-08 01:43:03.570980 | orchestrator |
2026-01-08 01:43:03.571118 | orchestrator | ## IMAGE (API)
2026-01-08 01:43:03.571127 | orchestrator |
2026-01-08 01:43:03.571134 | orchestrator | + echo
2026-01-08 01:43:03.571140 | orchestrator | + echo '## IMAGE (API)'
2026-01-08 01:43:03.571151 | orchestrator | + echo
2026-01-08 01:43:03.571158 | orchestrator | + _tempest tempest.api.image.v2
2026-01-08 01:43:03.571164 | orchestrator | + local regex=tempest.api.image.v2
2026-01-08 01:43:03.571375 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16
2026-01-08 01:43:03.572253 | orchestrator | ++ date +%Y%m%d-%H%M
2026-01-08 01:43:03.574009 | orchestrator | + tee -a /opt/tempest/20260108-0143.log
2026-01-08 01:43:07.421336 | orchestrator | 2026-01-08 01:43:07.421 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2026-01-08 01:43:07.518077 | orchestrator | [... "Register additional config options" / "List additional config options" repeated for the same 11 plugins as above: telemetry_tests, barbican_tests, glance_tests, octavia-tempest-plugin, magnum_tests, designate, neutron_tests, cinder_tests, manila_tests, keystone_tests, ironic_tests ...]
2026-01-08 01:43:20.991218 | orchestrator |
2026-01-08 01:43:20.991298 | orchestrator | =========================
2026-01-08 01:43:20.991307 | orchestrator | Failures during discovery
2026-01-08 01:43:20.991312 | orchestrator | =========================
2026-01-08 01:43:20.991317 | orchestrator | --- stdout ---
2026-01-08 01:43:20.991323 | orchestrator | 2026-01-08 01:43:11.025 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf
2026-01-08 01:43:20.991331 | orchestrator | [... identical plugin Register/List/"Loading tests" output and the "auth_version" deprecation warning repeated ...]
2026-01-08 01:43:20.991631 | orchestrator | --- import errors ---
2026-01-08 01:43:20.991639 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration
2026-01-08 01:43:20.991667 | orchestrator | [... same traceback as above: AttributeError: module 'testtools' has no attribute 'try_import' ...]
2026-01-08 01:43:20.991725 | orchestrator | ================================================================================
2026-01-08 01:43:20.991730 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path.
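[Editor's note] The same import failure recurs verbatim for every suite in this job. When triaging long console logs like this one, a small hypothetical helper (not part of the OSISM job; the regexes simply match the log format shown above) can condense the repeated discovery failures to one line per module:

```python
import re

# Match the two lines tempest's discovery output emits per failure:
# "Failed to import test module: <dotted.module>" and the final
# "AttributeError: ..." line of the accompanying traceback.
FAILURE_RE = re.compile(r"Failed to import test module: (\S+)")
ERROR_RE = re.compile(r"(AttributeError: [^\n]*)")

def summarize_discovery_failures(log_text):
    """Return (failed_module, error_line) pairs found in the log text."""
    modules = FAILURE_RE.findall(log_text)
    errors = ERROR_RE.findall(log_text)
    # Each "Failed to import" line is followed by exactly one traceback in
    # this output, so positional pairing is sufficient here.
    return list(zip(modules, errors))
```

Running it over this console log would show that every suite fails on the single module `neutron_tempest_plugin.scenario.test_dns_integration`, i.e. one root cause rather than many.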
2026-01-08 01:43:21.454322 | orchestrator |
2026-01-08 01:43:21.454390 | orchestrator | ## NETWORK (API)
2026-01-08 01:43:21.454398 | orchestrator |
2026-01-08 01:43:21.454405 | orchestrator | + echo
2026-01-08 01:43:21.454412 | orchestrator | + echo '## NETWORK (API)'
2026-01-08 01:43:21.454419 | orchestrator | + echo
2026-01-08 01:43:21.454425 | orchestrator | + _tempest tempest.api.network
2026-01-08 01:43:21.454431 | orchestrator | + local regex=tempest.api.network
2026-01-08 01:43:21.455280 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16
2026-01-08 01:43:21.457451 | orchestrator | ++ date +%Y%m%d-%H%M
2026-01-08 01:43:21.459838 | orchestrator | + tee -a /opt/tempest/20260108-0143.log
2026-01-08 01:43:25.249614 | orchestrator | 2026-01-08 01:43:25.249 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2026-01-08 01:43:25.361366 | orchestrator | [... "Register additional config options" / "List additional config options" repeated for the same 11 plugins as above ...]
2026-01-08 01:43:39.188632 | orchestrator |
2026-01-08 01:43:39.188733 | orchestrator | =========================
2026-01-08 01:43:39.188745 | orchestrator | Failures during discovery
2026-01-08 01:43:39.188753 | orchestrator | =========================
2026-01-08 01:43:39.188760 | orchestrator | --- stdout ---
2026-01-08 01:43:39.188769 | orchestrator | 2026-01-08 01:43:28.866 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf
2026-01-08 01:43:39.188778 | orchestrator | [... identical plugin Register/List/"Loading tests" output and the "auth_version" deprecation warning repeated ...]
2026-01-08 01:43:39.189103 | orchestrator | --- import errors ---
2026-01-08 01:43:39.189111 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration
2026-01-08 01:43:39.189117 | orchestrator | [... same traceback as above: AttributeError: module 'testtools' has no attribute 'try_import' ...]
2026-01-08 01:43:39.189191 | orchestrator | ================================================================================
2026-01-08 01:43:39.189203 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path.
2026-01-08 01:43:39.653725 | orchestrator |
2026-01-08 01:43:39.653809 | orchestrator | ## VOLUME (API)
2026-01-08 01:43:39.653821 | orchestrator |
2026-01-08 01:43:39.653829 | orchestrator | + echo
2026-01-08 01:43:39.653837 | orchestrator | + echo '## VOLUME (API)'
2026-01-08 01:43:39.653846 | orchestrator | + echo
2026-01-08 01:43:39.653853 | orchestrator | + _tempest tempest.api.volume
2026-01-08 01:43:39.653861 | orchestrator | + local regex=tempest.api.volume
2026-01-08 01:43:39.654120 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16
2026-01-08 01:43:39.654252 | orchestrator | ++ date +%Y%m%d-%H%M
2026-01-08 01:43:39.656771 | orchestrator | + tee -a /opt/tempest/20260108-0143.log
2026-01-08 01:43:43.539606 | orchestrator | 2026-01-08 01:43:43.539 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2026-01-08 01:43:43.638842 | orchestrator | [... "Register additional config options" / "List additional config options" repeated for the same 11 plugins as above ...]
2026-01-08 01:43:57.442326 | orchestrator |
2026-01-08 01:43:57.442455 | orchestrator | =========================
2026-01-08 01:43:57.442478 | orchestrator | Failures during discovery
2026-01-08 01:43:57.442493 | orchestrator | =========================
2026-01-08 01:43:57.442510 | orchestrator | --- stdout ---
2026-01-08 01:43:57.442528 | orchestrator | 2026-01-08 01:43:47.222 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf
2026-01-08 01:43:57.442545 | orchestrator | 2026-01-08 01:43:47.223 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-08 01:43:57.442562 | orchestrator | 2026-01-08 01:43:47.223 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-08 01:43:57.442577 | orchestrator 
| 2026-01-08 01:43:47.224 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests 2026-01-08 01:43:57.442653 | orchestrator | 2026-01-08 01:43:47.224 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin 2026-01-08 01:43:57.442671 | orchestrator | 2026-01-08 01:43:47.224 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests 2026-01-08 01:43:57.442687 | orchestrator | 2026-01-08 01:43:47.224 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate 2026-01-08 01:43:57.442702 | orchestrator | 2026-01-08 01:43:47.224 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests 2026-01-08 01:43:57.442716 | orchestrator | 2026-01-08 01:43:47.225 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests 2026-01-08 01:43:57.442730 | orchestrator | 2026-01-08 01:43:47.225 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests 2026-01-08 01:43:57.442746 | orchestrator | 2026-01-08 01:43:47.225 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests 2026-01-08 01:43:57.442760 | orchestrator | 2026-01-08 01:43:47.225 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests 2026-01-08 01:43:57.442775 | orchestrator | 2026-01-08 01:43:47.226 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests 2026-01-08 01:43:57.442790 | orchestrator | 2026-01-08 01:43:47.226 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests 2026-01-08 01:43:57.442806 | orchestrator | 
2026-01-08 01:43:47.226 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests 2026-01-08 01:43:57.442823 | orchestrator | 2026-01-08 01:43:47.226 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin 2026-01-08 01:43:57.442840 | orchestrator | 2026-01-08 01:43:47.226 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests 2026-01-08 01:43:57.442855 | orchestrator | 2026-01-08 01:43:47.226 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate 2026-01-08 01:43:57.442871 | orchestrator | 2026-01-08 01:43:47.226 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests 2026-01-08 01:43:57.442918 | orchestrator | 2026-01-08 01:43:47.226 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests 2026-01-08 01:43:57.442934 | orchestrator | 2026-01-08 01:43:47.226 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests 2026-01-08 01:43:57.442950 | orchestrator | 2026-01-08 01:43:47.226 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests 2026-01-08 01:43:57.442965 | orchestrator | 2026-01-08 01:43:47.226 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests 2026-01-08 01:43:57.442982 | orchestrator | 2026-01-08 01:43:47.229 10 WARNING oslo_config.cfg [-] Deprecated: Option "auth_version" from group "identity" is deprecated for removal (Identity v2 API was removed and v3 is the only available identity API version now). Its value may be silently ignored in the future. 
2026-01-08 01:43:57.443002 | orchestrator | 2026-01-08 01:43:48.040 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: telemetry_tests 2026-01-08 01:43:57.443018 | orchestrator | 2026-01-08 01:43:48.040 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: barbican_tests 2026-01-08 01:43:57.443035 | orchestrator | 2026-01-08 01:43:48.041 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: glance_tests 2026-01-08 01:43:57.443050 | orchestrator | 2026-01-08 01:43:48.041 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: octavia-tempest-plugin 2026-01-08 01:43:57.443088 | orchestrator | 2026-01-08 01:43:48.041 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: magnum_tests 2026-01-08 01:43:57.443104 | orchestrator | 2026-01-08 01:43:48.041 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: designate 2026-01-08 01:43:57.443119 | orchestrator | 2026-01-08 01:43:48.041 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: neutron_tests 2026-01-08 01:43:57.443160 | orchestrator | 2026-01-08 01:43:48.041 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: cinder_tests 2026-01-08 01:43:57.443175 | orchestrator | 2026-01-08 01:43:48.041 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: manila_tests 2026-01-08 01:43:57.443190 | orchestrator | 2026-01-08 01:43:48.041 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: keystone_tests 2026-01-08 01:43:57.443204 | orchestrator | 2026-01-08 01:43:48.041 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: ironic_tests 2026-01-08 01:43:57.443218 | orchestrator | --- import errors --- 2026-01-08 01:43:57.443234 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration 2026-01-08 01:43:57.443251 | orchestrator | Traceback 
(most recent call last): 2026-01-08 01:43:57.443268 | orchestrator | File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path 2026-01-08 01:43:57.443284 | orchestrator | module = self._get_module_from_name(name) 2026-01-08 01:43:57.443318 | orchestrator | File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name 2026-01-08 01:43:57.443333 | orchestrator | __import__(name) 2026-01-08 01:43:57.443347 | orchestrator | ~~~~~~~~~~^^^^^^ 2026-01-08 01:43:57.443367 | orchestrator | File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in 2026-01-08 01:43:57.443382 | orchestrator | dns_base = testtools.try_import('designate_tempest_plugin.tests.base') 2026-01-08 01:43:57.443396 | orchestrator | ^^^^^^^^^^^^^^^^^^^^ 2026-01-08 01:43:57.443410 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import' 2026-01-08 01:43:57.443425 | orchestrator | 2026-01-08 01:43:57.443446 | orchestrator | ================================================================================ 2026-01-08 01:43:57.443460 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path. 
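The discovery failure above (repeated identically in the COMPUTE and DNS runs below) comes from neutron_tempest_plugin calling testtools.try_import, a helper that the installed testtools release no longer exports. As a minimal sketch only, not the project's actual fix, the helper's documented behaviour (return the named module, or a fallback value when the import fails) can be reproduced with the standard library:

```python
import importlib


def try_import(name, alternative=None):
    """Return the module named by dotted path `name`, or `alternative`
    (default None) when it cannot be imported -- the behaviour the
    removed testtools.try_import helper provided."""
    try:
        return importlib.import_module(name)
    except ImportError:
        return alternative


# The failing plugin line could then be written without testtools:
# dns_base = try_import('designate_tempest_plugin.tests.base')
```

With such a shim in place, `try_import('designate_tempest_plugin.tests.base')` yields None when the designate plugin is absent instead of raising, which is exactly what the scenario module relies on at line 40.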
2026-01-08 01:43:57.860095 | orchestrator |
2026-01-08 01:43:57.860213 | orchestrator | ## COMPUTE (API)
2026-01-08 01:43:57.860226 | orchestrator |
2026-01-08 01:43:57.860237 | orchestrator | + echo
2026-01-08 01:43:57.860247 | orchestrator | + echo '## COMPUTE (API)'
2026-01-08 01:43:57.860284 | orchestrator | + echo
2026-01-08 01:43:57.861164 | orchestrator | + _tempest tempest.api.compute
2026-01-08 01:43:57.861205 | orchestrator | + local regex=tempest.api.compute
2026-01-08 01:43:57.861220 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16
2026-01-08 01:43:57.861232 | orchestrator | ++ date +%Y%m%d-%H%M
2026-01-08 01:43:57.861787 | orchestrator | + tee -a /opt/tempest/20260108-0143.log
2026-01-08 01:44:01.703589 | orchestrator | 2026-01-08 01:44:01.703 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2026-01-08 01:44:01.799657 | orchestrator | 2026-01-08 01:44:01.799 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-08 01:44:01.799728 | orchestrator | 2026-01-08 01:44:01.799 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-08 01:44:01.799737 | orchestrator | 2026-01-08 01:44:01.799 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests
2026-01-08 01:44:01.799744 | orchestrator | 2026-01-08 01:44:01.800 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin
2026-01-08 01:44:01.799752 | orchestrator | 2026-01-08 01:44:01.800 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests
2026-01-08 01:44:01.799758 | orchestrator | 2026-01-08 01:44:01.800 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate
2026-01-08 01:44:01.799765 | orchestrator | 2026-01-08 01:44:01.800 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests
2026-01-08 01:44:01.799866 | orchestrator | 2026-01-08 01:44:01.800 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests
2026-01-08 01:44:01.800456 | orchestrator | 2026-01-08 01:44:01.801 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2026-01-08 01:44:01.801320 | orchestrator | 2026-01-08 01:44:01.801 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests
2026-01-08 01:44:01.801348 | orchestrator | 2026-01-08 01:44:01.801 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests
2026-01-08 01:44:01.803738 | orchestrator | 2026-01-08 01:44:01.802 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-08 01:44:01.803794 | orchestrator | 2026-01-08 01:44:01.802 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-08 01:44:01.803806 | orchestrator | 2026-01-08 01:44:01.802 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-08 01:44:01.804116 | orchestrator | 2026-01-08 01:44:01.802 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-08 01:44:01.804161 | orchestrator | 2026-01-08 01:44:01.802 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-08 01:44:01.804169 | orchestrator | 2026-01-08 01:44:01.802 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-08 01:44:01.804175 | orchestrator | 2026-01-08 01:44:01.802 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-08 01:44:01.804182 | orchestrator | 2026-01-08 01:44:01.802 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-08 01:44:01.804189 | orchestrator | 2026-01-08 01:44:01.802 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-08 01:44:01.804217 | orchestrator | 2026-01-08 01:44:01.803 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-08 01:44:01.804224 | orchestrator | 2026-01-08 01:44:01.803 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests
2026-01-08 01:44:15.118622 | orchestrator |
2026-01-08 01:44:15.118717 | orchestrator | =========================
2026-01-08 01:44:15.118730 | orchestrator | Failures during discovery
2026-01-08 01:44:15.118737 | orchestrator | =========================
2026-01-08 01:44:15.118743 | orchestrator | --- stdout ---
2026-01-08 01:44:15.118752 | orchestrator | 2026-01-08 01:44:05.387 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf
2026-01-08 01:44:15.118761 | orchestrator | 2026-01-08 01:44:05.389 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-08 01:44:15.118769 | orchestrator | 2026-01-08 01:44:05.389 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-08 01:44:15.118777 | orchestrator | 2026-01-08 01:44:05.389 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests
2026-01-08 01:44:15.118784 | orchestrator | 2026-01-08 01:44:05.389 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin
2026-01-08 01:44:15.118792 | orchestrator | 2026-01-08 01:44:05.390 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests
2026-01-08 01:44:15.118799 | orchestrator | 2026-01-08 01:44:05.390 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate
2026-01-08 01:44:15.118806 | orchestrator | 2026-01-08 01:44:05.390 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests
2026-01-08 01:44:15.118813 | orchestrator | 2026-01-08 01:44:05.390 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests
2026-01-08 01:44:15.118820 | orchestrator | 2026-01-08 01:44:05.390 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2026-01-08 01:44:15.118843 | orchestrator | 2026-01-08 01:44:05.391 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests
2026-01-08 01:44:15.118851 | orchestrator | 2026-01-08 01:44:05.391 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests
2026-01-08 01:44:15.118858 | orchestrator | 2026-01-08 01:44:05.391 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-08 01:44:15.118865 | orchestrator | 2026-01-08 01:44:05.391 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-08 01:44:15.118872 | orchestrator | 2026-01-08 01:44:05.392 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-08 01:44:15.118878 | orchestrator | 2026-01-08 01:44:05.392 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-08 01:44:15.118887 | orchestrator | 2026-01-08 01:44:05.392 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-08 01:44:15.118894 | orchestrator | 2026-01-08 01:44:05.392 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-08 01:44:15.118900 | orchestrator | 2026-01-08 01:44:05.392 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-08 01:44:15.118907 | orchestrator | 2026-01-08 01:44:05.392 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-08 01:44:15.118914 | orchestrator | 2026-01-08 01:44:05.392 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-08 01:44:15.118921 | orchestrator | 2026-01-08 01:44:05.392 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-08 01:44:15.118946 | orchestrator | 2026-01-08 01:44:05.392 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests
2026-01-08 01:44:15.118956 | orchestrator | 2026-01-08 01:44:05.395 10 WARNING oslo_config.cfg [-] Deprecated: Option "auth_version" from group "identity" is deprecated for removal (Identity v2 API was removed and v3 is the only available identity API version now). Its value may be silently ignored in the future.
2026-01-08 01:44:15.118964 | orchestrator | 2026-01-08 01:44:06.253 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: telemetry_tests
2026-01-08 01:44:15.118970 | orchestrator | 2026-01-08 01:44:06.254 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: barbican_tests
2026-01-08 01:44:15.118978 | orchestrator | 2026-01-08 01:44:06.254 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: glance_tests
2026-01-08 01:44:15.118984 | orchestrator | 2026-01-08 01:44:06.254 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: octavia-tempest-plugin
2026-01-08 01:44:15.119007 | orchestrator | 2026-01-08 01:44:06.254 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: magnum_tests
2026-01-08 01:44:15.119014 | orchestrator | 2026-01-08 01:44:06.254 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: designate
2026-01-08 01:44:15.119021 | orchestrator | 2026-01-08 01:44:06.254 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: neutron_tests
2026-01-08 01:44:15.119027 | orchestrator | 2026-01-08 01:44:06.254 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: cinder_tests
2026-01-08 01:44:15.119033 | orchestrator | 2026-01-08 01:44:06.255 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: manila_tests
2026-01-08 01:44:15.119039 | orchestrator | 2026-01-08 01:44:06.255 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: keystone_tests
2026-01-08 01:44:15.119045 | orchestrator | 2026-01-08 01:44:06.255 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: ironic_tests
2026-01-08 01:44:15.119052 | orchestrator | --- import errors ---
2026-01-08 01:44:15.119059 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration
2026-01-08 01:44:15.119065 | orchestrator | Traceback (most recent call last):
2026-01-08 01:44:15.119073 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path
2026-01-08 01:44:15.119079 | orchestrator |     module = self._get_module_from_name(name)
2026-01-08 01:44:15.119085 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name
2026-01-08 01:44:15.119091 | orchestrator |     __import__(name)
2026-01-08 01:44:15.119097 | orchestrator |     ~~~~~~~~~~^^^^^^
2026-01-08 01:44:15.119103 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in
2026-01-08 01:44:15.119109 | orchestrator |     dns_base = testtools.try_import('designate_tempest_plugin.tests.base')
2026-01-08 01:44:15.119115 | orchestrator |     ^^^^^^^^^^^^^^^^^^^^
2026-01-08 01:44:15.119122 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import'
2026-01-08 01:44:15.119129 | orchestrator |
2026-01-08 01:44:15.119135 | orchestrator | ================================================================================
2026-01-08 01:44:15.119141 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path.
2026-01-08 01:44:15.586132 | orchestrator |
2026-01-08 01:44:15.586224 | orchestrator | ## DNS (API)
2026-01-08 01:44:15.586234 | orchestrator |
2026-01-08 01:44:15.586240 | orchestrator | + echo
2026-01-08 01:44:15.586246 | orchestrator | + echo '## DNS (API)'
2026-01-08 01:44:15.586253 | orchestrator | + echo
2026-01-08 01:44:15.586259 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2
2026-01-08 01:44:15.586268 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2
2026-01-08 01:44:15.587859 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16
2026-01-08 01:44:15.587964 | orchestrator | ++ date +%Y%m%d-%H%M
2026-01-08 01:44:15.589029 | orchestrator | + tee -a /opt/tempest/20260108-0144.log
2026-01-08 01:44:19.466200 | orchestrator | 2026-01-08 01:44:19.466 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2026-01-08 01:44:19.562077 | orchestrator | 2026-01-08 01:44:19.562 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-08 01:44:19.562143 | orchestrator | 2026-01-08 01:44:19.562 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-08 01:44:19.562150 | orchestrator | 2026-01-08 01:44:19.562 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests
2026-01-08 01:44:19.562261 | orchestrator | 2026-01-08 01:44:19.562 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin
2026-01-08 01:44:19.562270 | orchestrator | 2026-01-08 01:44:19.563 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests
2026-01-08 01:44:19.562275 | orchestrator | 2026-01-08 01:44:19.563 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate
2026-01-08 01:44:19.562736 | orchestrator | 2026-01-08 01:44:19.563 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests
2026-01-08 01:44:19.562795 | orchestrator | 2026-01-08 01:44:19.563 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests
2026-01-08 01:44:19.563205 | orchestrator | 2026-01-08 01:44:19.563 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2026-01-08 01:44:19.563379 | orchestrator | 2026-01-08 01:44:19.564 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests
2026-01-08 01:44:19.563737 | orchestrator | 2026-01-08 01:44:19.564 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests
2026-01-08 01:44:19.563891 | orchestrator | 2026-01-08 01:44:19.564 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-08 01:44:19.563952 | orchestrator | 2026-01-08 01:44:19.565 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-08 01:44:19.563989 | orchestrator | 2026-01-08 01:44:19.565 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-08 01:44:19.563997 | orchestrator | 2026-01-08 01:44:19.565 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-08 01:44:19.564006 | orchestrator | 2026-01-08 01:44:19.565 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-08 01:44:19.564600 | orchestrator | 2026-01-08 01:44:19.565 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-08 01:44:19.564652 | orchestrator | 2026-01-08 01:44:19.565 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-08 01:44:19.564661 | orchestrator | 2026-01-08 01:44:19.565 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-08 01:44:19.564670 | orchestrator | 2026-01-08 01:44:19.565 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-08 01:44:19.564679 | orchestrator | 2026-01-08 01:44:19.565 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-08 01:44:19.564687 | orchestrator | 2026-01-08 01:44:19.565 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests
2026-01-08 01:44:33.456284 | orchestrator |
2026-01-08 01:44:33.456351 | orchestrator | =========================
2026-01-08 01:44:33.456358 | orchestrator | Failures during discovery
2026-01-08 01:44:33.456363 | orchestrator | =========================
2026-01-08 01:44:33.456367 | orchestrator | --- stdout ---
2026-01-08 01:44:33.456372 | orchestrator | 2026-01-08 01:44:23.232 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf
2026-01-08 01:44:33.456377 | orchestrator | 2026-01-08 01:44:23.233 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-08 01:44:33.456383 | orchestrator | 2026-01-08 01:44:23.234 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-08 01:44:33.456387 | orchestrator | 2026-01-08 01:44:23.234 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests
2026-01-08 01:44:33.456392 | orchestrator | 2026-01-08 01:44:23.234 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin
2026-01-08 01:44:33.456396 | orchestrator | 2026-01-08 01:44:23.234 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests
2026-01-08 01:44:33.456402 | orchestrator | 2026-01-08 01:44:23.235 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate
2026-01-08 01:44:33.456408 | orchestrator | 2026-01-08 01:44:23.235 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests
2026-01-08 01:44:33.456417 | orchestrator | 2026-01-08 01:44:23.235 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests
2026-01-08 01:44:33.456425 | orchestrator | 2026-01-08 01:44:23.235 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2026-01-08 01:44:33.456431 | orchestrator | 2026-01-08 01:44:23.236 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests
2026-01-08 01:44:33.456437 | orchestrator | 2026-01-08 01:44:23.236 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests
2026-01-08 01:44:33.456444 | orchestrator | 2026-01-08 01:44:23.236 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-08 01:44:33.456450 | orchestrator | 2026-01-08 01:44:23.236 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-08 01:44:33.456456 | orchestrator | 2026-01-08 01:44:23.236 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-08 01:44:33.456462 | orchestrator | 2026-01-08 01:44:23.237 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-08 01:44:33.456470 | orchestrator | 2026-01-08 01:44:23.237 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-08 01:44:33.456476 | orchestrator | 2026-01-08 01:44:23.237 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-08 01:44:33.456483 | orchestrator | 2026-01-08 01:44:23.237 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-08 01:44:33.456489 | orchestrator | 2026-01-08 01:44:23.237 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-08 01:44:33.456496 | orchestrator | 2026-01-08 01:44:23.237 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-08 01:44:33.456503 | orchestrator | 2026-01-08 01:44:23.237 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-08 01:44:33.456509 | orchestrator | 2026-01-08 01:44:23.237 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests
2026-01-08 01:44:33.456539 | orchestrator | 2026-01-08 01:44:23.240 10 WARNING oslo_config.cfg [-] Deprecated: Option "auth_version" from group "identity" is deprecated for removal (Identity v2 API was removed and v3 is the only available identity API version now). Its value may be silently ignored in the future.
2026-01-08 01:44:33.456548 | orchestrator | 2026-01-08 01:44:24.063 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: telemetry_tests
2026-01-08 01:44:33.456555 | orchestrator | 2026-01-08 01:44:24.064 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: barbican_tests
2026-01-08 01:44:33.456561 | orchestrator | 2026-01-08 01:44:24.064 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: glance_tests
2026-01-08 01:44:33.456565 | orchestrator | 2026-01-08 01:44:24.064 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: octavia-tempest-plugin
2026-01-08 01:44:33.456580 | orchestrator | 2026-01-08 01:44:24.064 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: magnum_tests
2026-01-08 01:44:33.456584 | orchestrator | 2026-01-08 01:44:24.064 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: designate
2026-01-08 01:44:33.456589 | orchestrator | 2026-01-08 01:44:24.064 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: neutron_tests
2026-01-08 01:44:33.456595 | orchestrator | 2026-01-08 01:44:24.064 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: cinder_tests
2026-01-08 01:44:33.456601 | orchestrator | 2026-01-08 01:44:24.064 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: manila_tests
2026-01-08 01:44:33.456607 | orchestrator | 2026-01-08 01:44:24.065 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: keystone_tests
2026-01-08 01:44:33.456613 | orchestrator | 2026-01-08 01:44:24.065 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: ironic_tests
2026-01-08 01:44:33.456619 | orchestrator | --- import errors ---
2026-01-08 01:44:33.456627 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration
2026-01-08 01:44:33.456633 | orchestrator | Traceback (most recent call last):
2026-01-08 01:44:33.456641 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path
2026-01-08 01:44:33.456647 | orchestrator |     module = self._get_module_from_name(name)
2026-01-08 01:44:33.456654 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name
2026-01-08 01:44:33.456661 | orchestrator |     __import__(name)
2026-01-08 01:44:33.456667 | orchestrator |     ~~~~~~~~~~^^^^^^
2026-01-08 01:44:33.456673 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in
2026-01-08 01:44:33.456681 | orchestrator |     dns_base = testtools.try_import('designate_tempest_plugin.tests.base')
2026-01-08 01:44:33.456685 | orchestrator |     ^^^^^^^^^^^^^^^^^^^^
2026-01-08 01:44:33.456689 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import'
2026-01-08 01:44:33.456693 | orchestrator |
2026-01-08 01:44:33.456697 | orchestrator | ================================================================================
2026-01-08 01:44:33.456701 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path.
2026-01-08 01:44:33.930575 | orchestrator |
2026-01-08 01:44:33.930654 | orchestrator | ## OBJECT-STORE (API)
2026-01-08 01:44:33.930665 | orchestrator |
2026-01-08 01:44:33.930672 | orchestrator | + echo
2026-01-08 01:44:33.930677 | orchestrator | + echo '## OBJECT-STORE (API)'
2026-01-08 01:44:33.930682 | orchestrator | + echo
2026-01-08 01:44:33.930687 | orchestrator | + _tempest tempest.api.object_storage
2026-01-08 01:44:33.930692 | orchestrator | + local regex=tempest.api.object_storage
2026-01-08 01:44:33.931156 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16
2026-01-08 01:44:33.932400 | orchestrator | ++ date +%Y%m%d-%H%M
2026-01-08 01:44:33.935786 | orchestrator | + tee -a /opt/tempest/20260108-0144.log
2026-01-08 01:44:37.888965 | orchestrator | 2026-01-08 01:44:37.888 1 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2026-01-08 01:44:37.986520 | orchestrator | 2026-01-08 01:44:37.986 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-08 01:44:37.986587 | orchestrator | 2026-01-08 01:44:37.986 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-08 01:44:37.986593 | orchestrator | 2026-01-08 01:44:37.986 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests
2026-01-08 01:44:37.986598 | orchestrator | 2026-01-08 01:44:37.986 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin
2026-01-08 01:44:37.986603 | orchestrator | 2026-01-08 01:44:37.987 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests
2026-01-08 01:44:37.986608 | orchestrator | 2026-01-08 01:44:37.987 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate
2026-01-08 01:44:37.986627 | orchestrator | 2026-01-08 01:44:37.987 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests
2026-01-08 01:44:37.987593 | orchestrator | 2026-01-08 01:44:37.987 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests
2026-01-08 01:44:37.987629 | orchestrator | 2026-01-08 01:44:37.988 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2026-01-08 01:44:37.987639 | orchestrator | 2026-01-08 01:44:37.988 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests
2026-01-08 01:44:37.987648 | orchestrator | 2026-01-08 01:44:37.988 1 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests
2026-01-08 01:44:37.988503 | orchestrator | 2026-01-08 01:44:37.989 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-08 01:44:37.988527 | orchestrator | 2026-01-08 01:44:37.989 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-08 01:44:37.988533 | orchestrator | 2026-01-08 01:44:37.989 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-08 01:44:37.988538 | orchestrator | 2026-01-08 01:44:37.989 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-08 01:44:37.988753 | orchestrator | 2026-01-08 01:44:37.989 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-08 01:44:37.988776 | orchestrator | 2026-01-08 01:44:37.989 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-08 01:44:37.988781 | orchestrator | 2026-01-08 01:44:37.989 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-08 01:44:37.988785 | orchestrator | 2026-01-08 01:44:37.989 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-08 01:44:37.988790 | orchestrator | 2026-01-08 01:44:37.989 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-08 01:44:37.988812 | orchestrator | 2026-01-08 01:44:37.989 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-08 01:44:37.988817 | orchestrator | 2026-01-08 01:44:37.989 1 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests
2026-01-08 01:44:51.021439 | orchestrator |
2026-01-08 01:44:51.021522 | orchestrator | =========================
2026-01-08 01:44:51.021533 | orchestrator | Failures during discovery
2026-01-08 01:44:51.021540 | orchestrator | =========================
2026-01-08 01:44:51.021547 | orchestrator | --- stdout ---
2026-01-08 01:44:51.021554 | orchestrator | 2026-01-08 01:44:41.684 10 INFO tempest [-] Using tempest config file /tempest/etc/tempest.conf
2026-01-08 01:44:51.021579 | orchestrator | 2026-01-08 01:44:41.685 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: telemetry_tests
2026-01-08 01:44:51.021588 | orchestrator | 2026-01-08 01:44:41.686 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2026-01-08 01:44:51.021594 | orchestrator | 2026-01-08 01:44:41.686 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: glance_tests
2026-01-08 01:44:51.021600 | orchestrator | 2026-01-08 01:44:41.686 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: octavia-tempest-plugin
2026-01-08 01:44:51.021606 | orchestrator | 2026-01-08 01:44:41.686 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: magnum_tests
2026-01-08 01:44:51.021653 | orchestrator | 2026-01-08 01:44:41.686 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: designate
2026-01-08 01:44:51.021660 | orchestrator | 2026-01-08 01:44:41.687 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: neutron_tests
2026-01-08 01:44:51.021666 | orchestrator | 2026-01-08 01:44:41.687 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: cinder_tests
2026-01-08 01:44:51.021672 | orchestrator | 2026-01-08 01:44:41.687 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2026-01-08 01:44:51.021678 | orchestrator | 2026-01-08 01:44:41.687 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: keystone_tests
2026-01-08 01:44:51.021685 | orchestrator | 2026-01-08 01:44:41.688 10 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: ironic_tests
2026-01-08 01:44:51.021690 | orchestrator | 2026-01-08 01:44:41.688 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: telemetry_tests
2026-01-08 01:44:51.021695 | orchestrator | 2026-01-08 01:44:41.688 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2026-01-08 01:44:51.021702 | orchestrator | 2026-01-08 01:44:41.688 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: glance_tests
2026-01-08 01:44:51.021707 | orchestrator | 2026-01-08 01:44:41.688 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: octavia-tempest-plugin
2026-01-08 01:44:51.021714 | orchestrator | 2026-01-08 01:44:41.688 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: magnum_tests
2026-01-08 01:44:51.021721 | orchestrator | 2026-01-08 01:44:41.688 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: designate
2026-01-08 01:44:51.021726 | orchestrator | 2026-01-08 01:44:41.689 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: neutron_tests
2026-01-08 01:44:51.021732 | orchestrator | 2026-01-08 01:44:41.689 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: cinder_tests
2026-01-08 01:44:51.021738 | orchestrator | 2026-01-08 01:44:41.689 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2026-01-08 01:44:51.021744 | orchestrator | 2026-01-08 01:44:41.689 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: keystone_tests
2026-01-08 01:44:51.021750 | orchestrator | 2026-01-08 01:44:41.689 10 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: ironic_tests
2026-01-08 01:44:51.021758 | orchestrator | 2026-01-08 01:44:41.692 10 WARNING oslo_config.cfg [-] Deprecated: Option "auth_version" from group "identity" is deprecated for removal (Identity v2 API was removed and v3 is the only available identity API version now). Its value may be silently ignored in the future.
2026-01-08 01:44:51.021766 | orchestrator | 2026-01-08 01:44:42.502 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: telemetry_tests
2026-01-08 01:44:51.021780 | orchestrator | 2026-01-08 01:44:42.503 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: barbican_tests
2026-01-08 01:44:51.021786 | orchestrator | 2026-01-08 01:44:42.503 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: glance_tests
2026-01-08 01:44:51.021792 | orchestrator | 2026-01-08 01:44:42.503 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: octavia-tempest-plugin
2026-01-08 01:44:51.021810 | orchestrator | 2026-01-08 01:44:42.503 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: magnum_tests
2026-01-08 01:44:51.021816 | orchestrator | 2026-01-08 01:44:42.503 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: designate
2026-01-08 01:44:51.021823 | orchestrator | 2026-01-08 01:44:42.503 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: neutron_tests
2026-01-08 01:44:51.021829 | orchestrator | 2026-01-08 01:44:42.503 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: cinder_tests
2026-01-08 01:44:51.021835 | orchestrator | 2026-01-08 01:44:42.503 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: manila_tests
2026-01-08 01:44:51.021840 | orchestrator | 2026-01-08 01:44:42.503 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: keystone_tests
2026-01-08 01:44:51.021846 | orchestrator | 2026-01-08 01:44:42.503 10 INFO tempest.test_discover.plugins [-] Loading tests from Tempest plugin: ironic_tests
2026-01-08 01:44:51.021851 | orchestrator | --- import errors ---
2026-01-08 01:44:51.021858 | orchestrator | Failed to import test module: neutron_tempest_plugin.scenario.test_dns_integration
2026-01-08 01:44:51.021864 | orchestrator | Traceback (most recent call last):
2026-01-08 01:44:51.021880 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 396, in _find_test_path
2026-01-08 01:44:51.021886 | orchestrator |     module = self._get_module_from_name(name)
2026-01-08 01:44:51.021892 | orchestrator |   File "/usr/local/lib/python3.13/unittest/loader.py", line 339, in _get_module_from_name
2026-01-08 01:44:51.021899 | orchestrator |     __import__(name)
2026-01-08 01:44:51.021904 | orchestrator |     ~~~~~~~~~~^^^^^^
2026-01-08 01:44:51.021910 | orchestrator |   File "/usr/local/lib/python3.13/site-packages/neutron_tempest_plugin/scenario/test_dns_integration.py", line 40, in <module>
2026-01-08 01:44:51.021916 | orchestrator |     dns_base = testtools.try_import('designate_tempest_plugin.tests.base')
2026-01-08 01:44:51.021921 | orchestrator |                ^^^^^^^^^^^^^^^^^^^^
2026-01-08 01:44:51.021927 | orchestrator | AttributeError: module 'testtools' has no attribute 'try_import'
2026-01-08 01:44:51.021933 | orchestrator |
2026-01-08 01:44:51.021939 | orchestrator | ================================================================================
2026-01-08 01:44:51.021946 | orchestrator | The above traceback was encountered during test discovery which imports all the found test modules in the specified test_path.
2026-01-08 01:44:51.795368 | orchestrator | ok: Runtime: 0:03:37.992737
2026-01-08 01:44:51.821255 |
2026-01-08 01:44:51.821440 | TASK [Check prometheus alert status]
2026-01-08 01:44:52.359807 | orchestrator | skipping: Conditional result was False
2026-01-08 01:44:52.363835 |
2026-01-08 01:44:52.364010 | PLAY RECAP
2026-01-08 01:44:52.364152 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0
2026-01-08 01:44:52.364222 |
2026-01-08 01:44:52.617409 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-01-08 01:44:52.620283 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-08 01:44:53.386141 |
2026-01-08 01:44:53.386329 | PLAY [Post output play]
2026-01-08 01:44:53.404021 |
2026-01-08 01:44:53.404199 | LOOP [stage-output : Register sources]
2026-01-08 01:44:53.488149 |
2026-01-08 01:44:53.488566 | TASK [stage-output : Check sudo]
2026-01-08 01:44:54.345917 | orchestrator | sudo: a password is required
2026-01-08 01:44:54.530651 | orchestrator | ok: Runtime: 0:00:00.009699
2026-01-08 01:44:54.544691 |
2026-01-08 01:44:54.544880 | LOOP [stage-output : Set source and destination for files and folders]
2026-01-08 01:44:54.587681 |
2026-01-08 01:44:54.588071 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-01-08 01:44:54.658248 | orchestrator | ok
2026-01-08 01:44:54.668117 |
2026-01-08 01:44:54.668298 | LOOP [stage-output : Ensure target folders exist]
2026-01-08 01:44:55.229777 | orchestrator | ok: "docs"
2026-01-08 01:44:55.230160 |
2026-01-08 01:44:56.251735 | orchestrator | ok: "artifacts"
2026-01-08 01:44:56.554720 | orchestrator | ok: "logs"
2026-01-08 01:44:56.578763 |
2026-01-08 01:44:56.579080 | LOOP [stage-output : Copy files and folders to staging folder]
2026-01-08 01:44:56.622047 |
2026-01-08 01:44:56.622408 | TASK [stage-output : Make all log files readable]
2026-01-08 01:44:56.990909 | orchestrator | ok
2026-01-08 01:44:57.002584 |
2026-01-08 01:44:57.002755 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-01-08 01:44:57.039943 | orchestrator | skipping: Conditional result was False
2026-01-08 01:44:57.058744 |
2026-01-08 01:44:57.059014 | TASK [stage-output : Discover log files for compression]
2026-01-08 01:44:57.084481 | orchestrator | skipping: Conditional result was False
2026-01-08 01:44:57.099798 |
2026-01-08 01:44:57.099994 | LOOP [stage-output : Archive everything from logs]
2026-01-08 01:44:57.160460 |
2026-01-08 01:44:57.160717 | PLAY [Post cleanup play]
2026-01-08 01:44:57.173622 |
2026-01-08 01:44:57.173784 | TASK [Set cloud fact (Zuul deployment)]
2026-01-08 01:44:57.259035 | orchestrator | ok
2026-01-08 01:44:57.268652 |
2026-01-08 01:44:57.268790 | TASK [Set cloud fact (local deployment)]
2026-01-08 01:44:57.313400 | orchestrator | skipping: Conditional result was False
2026-01-08 01:44:57.329725 |
2026-01-08 01:44:57.329916 | TASK [Clean the cloud environment]
2026-01-08 01:45:01.438666 | orchestrator | 2026-01-08 01:45:01 - clean up servers
2026-01-08 01:45:02.258471 | orchestrator | 2026-01-08 01:45:02 - testbed-manager
2026-01-08 01:45:02.342009 | orchestrator | 2026-01-08 01:45:02 - testbed-node-2
2026-01-08 01:45:02.423964 | orchestrator | 2026-01-08 01:45:02 - testbed-node-0
2026-01-08 01:45:02.507039 | orchestrator | 2026-01-08 01:45:02 - testbed-node-1
2026-01-08 01:45:02.587594 | orchestrator | 2026-01-08 01:45:02 - testbed-node-5
2026-01-08 01:45:02.673955 | orchestrator | 2026-01-08 01:45:02 - testbed-node-3
2026-01-08 01:45:02.762952 | orchestrator | 2026-01-08 01:45:02 - testbed-node-4
2026-01-08 01:45:02.851567 | orchestrator | 2026-01-08 01:45:02 - clean up keypairs
2026-01-08 01:45:02.874762 | orchestrator | 2026-01-08 01:45:02 - testbed
2026-01-08 01:45:02.904669 | orchestrator | 2026-01-08 01:45:02 - wait for servers to be gone
2026-01-08 01:45:18.435139 | orchestrator | 2026-01-08 01:45:18 - clean up ports
2026-01-08 01:45:18.625055 | orchestrator | 2026-01-08 01:45:18 - 5fe1a5a5-8556-4636-a8dc-1cd976cfeaa1
2026-01-08 01:45:19.080693 | orchestrator | 2026-01-08 01:45:19 - 70a93737-a76f-48b4-868b-9e523ff11fa6
2026-01-08 01:45:19.317351 | orchestrator | 2026-01-08 01:45:19 - 9fa04825-e9d9-41b9-9149-18c532c8d419
2026-01-08 01:45:19.571340 | orchestrator | 2026-01-08 01:45:19 - bad5d18b-17d3-48a8-b762-f8210a77ea89
2026-01-08 01:45:19.779729 | orchestrator | 2026-01-08 01:45:19 - c13ce862-fd87-4731-99ba-3c8fc56fc636
2026-01-08 01:45:19.999775 | orchestrator | 2026-01-08 01:45:19 - e5483431-1898-4ba1-ab58-805a479cd8e4
2026-01-08 01:45:20.210882 | orchestrator | 2026-01-08 01:45:20 - f4963569-02aa-49af-a0a8-51ca33ab1958
2026-01-08 01:45:20.503458 | orchestrator | 2026-01-08 01:45:20 - clean up volumes
2026-01-08 01:45:20.603407 | orchestrator | 2026-01-08 01:45:20 - testbed-volume-2-node-base
2026-01-08 01:45:20.643168 | orchestrator | 2026-01-08 01:45:20 - testbed-volume-1-node-base
2026-01-08 01:45:20.688076 | orchestrator | 2026-01-08 01:45:20 - testbed-volume-3-node-base
2026-01-08 01:45:20.733975 | orchestrator | 2026-01-08 01:45:20 - testbed-volume-manager-base
2026-01-08 01:45:20.780371 | orchestrator | 2026-01-08 01:45:20 - testbed-volume-5-node-base
2026-01-08 01:45:20.823669 | orchestrator | 2026-01-08 01:45:20 - testbed-volume-0-node-base
2026-01-08 01:45:20.865472 | orchestrator | 2026-01-08 01:45:20 - testbed-volume-1-node-4
2026-01-08 01:45:20.909101 | orchestrator | 2026-01-08 01:45:20 - testbed-volume-4-node-base
2026-01-08 01:45:20.952045 | orchestrator | 2026-01-08 01:45:20 - testbed-volume-0-node-3
2026-01-08 01:45:20.995224 | orchestrator | 2026-01-08 01:45:20 - testbed-volume-8-node-5
2026-01-08 01:45:21.036349 | orchestrator | 2026-01-08 01:45:21 - testbed-volume-7-node-4
2026-01-08 01:45:21.077234 | orchestrator | 2026-01-08 01:45:21 - testbed-volume-2-node-5
2026-01-08 01:45:21.120106 | orchestrator | 2026-01-08 01:45:21 - testbed-volume-4-node-4
2026-01-08 01:45:21.162135 | orchestrator | 2026-01-08 01:45:21 - testbed-volume-6-node-3
2026-01-08 01:45:21.207500 | orchestrator | 2026-01-08 01:45:21 - testbed-volume-3-node-3
2026-01-08 01:45:21.249655 | orchestrator | 2026-01-08 01:45:21 - testbed-volume-5-node-5
2026-01-08 01:45:21.290223 | orchestrator | 2026-01-08 01:45:21 - disconnect routers
2026-01-08 01:45:21.360067 | orchestrator | 2026-01-08 01:45:21 - testbed
2026-01-08 01:45:22.386268 | orchestrator | 2026-01-08 01:45:22 - clean up subnets
2026-01-08 01:45:22.434861 | orchestrator | 2026-01-08 01:45:22 - subnet-testbed-management
2026-01-08 01:45:22.610212 | orchestrator | 2026-01-08 01:45:22 - clean up networks
2026-01-08 01:45:22.756855 | orchestrator | 2026-01-08 01:45:22 - net-testbed-management
2026-01-08 01:45:23.053753 | orchestrator | 2026-01-08 01:45:23 - clean up security groups
2026-01-08 01:45:23.104333 | orchestrator | 2026-01-08 01:45:23 - testbed-node
2026-01-08 01:45:23.222107 | orchestrator | 2026-01-08 01:45:23 - testbed-management
2026-01-08 01:45:23.337126 | orchestrator | 2026-01-08 01:45:23 - clean up floating ips
2026-01-08 01:45:23.377817 | orchestrator | 2026-01-08 01:45:23 - 81.163.193.62
2026-01-08 01:45:23.755774 | orchestrator | 2026-01-08 01:45:23 - clean up routers
2026-01-08 01:45:23.874169 | orchestrator | 2026-01-08 01:45:23 - testbed
2026-01-08 01:45:24.899940 | orchestrator | ok: Runtime: 0:00:27.097758
2026-01-08 01:45:24.904573 |
2026-01-08 01:45:24.904765 | PLAY RECAP
2026-01-08 01:45:24.904924 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-01-08 01:45:24.904989 |
2026-01-08 01:45:25.080740 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-08 01:45:25.081907 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-08 01:45:25.894804 |
2026-01-08 01:45:25.895061 | PLAY [Cleanup play]
2026-01-08 01:45:25.913212 |
2026-01-08 01:45:25.913382 | TASK [Set cloud fact (Zuul deployment)]
2026-01-08 01:45:25.965872 | orchestrator | ok
2026-01-08 01:45:25.973315 |
2026-01-08 01:45:25.973476 | TASK [Set cloud fact (local deployment)]
2026-01-08 01:45:26.008501 | orchestrator | skipping: Conditional result was False
2026-01-08 01:45:26.020697 |
2026-01-08 01:45:26.020892 | TASK [Clean the cloud environment]
2026-01-08 01:45:27.209102 | orchestrator | 2026-01-08 01:45:27 - clean up servers
2026-01-08 01:45:27.793693 | orchestrator | 2026-01-08 01:45:27 - clean up keypairs
2026-01-08 01:45:27.810752 | orchestrator | 2026-01-08 01:45:27 - wait for servers to be gone
2026-01-08 01:45:27.853118 | orchestrator | 2026-01-08 01:45:27 - clean up ports
2026-01-08 01:45:27.925149 | orchestrator | 2026-01-08 01:45:27 - clean up volumes
2026-01-08 01:45:27.986634 | orchestrator | 2026-01-08 01:45:27 - disconnect routers
2026-01-08 01:45:28.011877 | orchestrator | 2026-01-08 01:45:28 - clean up subnets
2026-01-08 01:45:28.031879 | orchestrator | 2026-01-08 01:45:28 - clean up networks
2026-01-08 01:45:28.205254 | orchestrator | 2026-01-08 01:45:28 - clean up security groups
2026-01-08 01:45:28.242521 | orchestrator | 2026-01-08 01:45:28 - clean up floating ips
2026-01-08 01:45:28.272673 | orchestrator | 2026-01-08 01:45:28 - clean up routers
2026-01-08 01:45:28.569308 | orchestrator | ok: Runtime: 0:00:01.478545
2026-01-08 01:45:28.572292 |
2026-01-08 01:45:28.572428 | PLAY RECAP
2026-01-08 01:45:28.572521 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-01-08 01:45:28.572569 |
2026-01-08 01:45:28.728934 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-08 01:45:28.732787 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-08 01:45:29.537226 |
2026-01-08 01:45:29.537410 | PLAY [Base post-fetch]
2026-01-08 01:45:29.554353 |
2026-01-08 01:45:29.554518 | TASK [fetch-output : Set log path for multiple nodes]
2026-01-08 01:45:29.610222 | orchestrator | skipping: Conditional result was False
2026-01-08 01:45:29.624397 |
2026-01-08 01:45:29.624639 | TASK [fetch-output : Set log path for single node]
2026-01-08 01:45:29.682546 | orchestrator | ok
2026-01-08 01:45:29.691585 |
2026-01-08 01:45:29.691753 | LOOP [fetch-output : Ensure local output dirs]
2026-01-08 01:45:30.195633 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/20443b1ab3f74324ba8bcbc6fdfc2e06/work/logs"
2026-01-08 01:45:30.477978 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/20443b1ab3f74324ba8bcbc6fdfc2e06/work/artifacts"
2026-01-08 01:45:30.765762 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/20443b1ab3f74324ba8bcbc6fdfc2e06/work/docs"
2026-01-08 01:45:30.789579 |
2026-01-08 01:45:30.789847 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-01-08 01:45:31.763896 | orchestrator | changed: .d..t...... ./
2026-01-08 01:45:31.764159 | orchestrator | changed: All items complete
2026-01-08 01:45:31.764208 |
2026-01-08 01:45:32.487368 | orchestrator | changed: .d..t...... ./
2026-01-08 01:45:33.271862 | orchestrator | changed: .d..t...... ./
2026-01-08 01:45:33.295257 |
2026-01-08 01:45:33.295404 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-01-08 01:45:33.323753 | orchestrator | skipping: Conditional result was False
2026-01-08 01:45:33.330345 | orchestrator | skipping: Conditional result was False
2026-01-08 01:45:33.357015 |
2026-01-08 01:45:33.357161 | PLAY RECAP
2026-01-08 01:45:33.357250 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-01-08 01:45:33.357293 |
2026-01-08 01:45:33.507044 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-08 01:45:33.509681 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-08 01:45:34.287561 |
2026-01-08 01:45:34.287736 | PLAY [Base post]
2026-01-08 01:45:34.303379 |
2026-01-08 01:45:34.303546 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-01-08 01:45:35.382255 | orchestrator | changed
2026-01-08 01:45:35.395075 |
2026-01-08 01:45:35.395264 | PLAY RECAP
2026-01-08 01:45:35.395370 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-01-08 01:45:35.395477 |
2026-01-08 01:45:35.571260 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-08 01:45:35.573885 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-01-08 01:45:36.490160 |
2026-01-08 01:45:36.490390 | PLAY [Base post-logs]
2026-01-08 01:45:36.501946 |
2026-01-08 01:45:36.502105 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-01-08 01:45:37.058240 | localhost | changed
2026-01-08 01:45:37.082299 |
2026-01-08 01:45:37.082502 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-01-08 01:45:37.136056 | localhost | ok
2026-01-08 01:45:37.145655 |
2026-01-08 01:45:37.145892 | TASK [Set zuul-log-path fact]
2026-01-08 01:45:37.188125 | localhost | ok
2026-01-08 01:45:37.204926 |
2026-01-08 01:45:37.205079 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-08 01:45:37.249357 | localhost | ok
2026-01-08 01:45:37.256866 |
2026-01-08 01:45:37.257035 | TASK [upload-logs : Create log directories]
2026-01-08 01:45:37.826263 | localhost | changed
2026-01-08 01:45:37.833950 |
2026-01-08 01:45:37.834133 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-01-08 01:45:38.409928 | localhost -> localhost | ok: Runtime: 0:00:00.007563
2026-01-08 01:45:38.417911 |
2026-01-08 01:45:38.418079 | TASK [upload-logs : Upload logs to log server]
2026-01-08 01:45:39.038923 | localhost | Output suppressed because no_log was given
2026-01-08 01:45:39.043341 |
2026-01-08 01:45:39.043520 | LOOP [upload-logs : Compress console log and json output]
2026-01-08 01:45:39.108128 | localhost | skipping: Conditional result was False
2026-01-08 01:45:39.113876 | localhost | skipping: Conditional result was False
2026-01-08 01:45:39.126563 |
2026-01-08 01:45:39.126987 | LOOP [upload-logs : Upload compressed console log and json output]
2026-01-08 01:45:39.192284 | localhost | skipping: Conditional result was False
2026-01-08 01:45:39.193132 |
2026-01-08 01:45:39.195089 | localhost | skipping: Conditional result was False
2026-01-08 01:45:39.201219 |
2026-01-08 01:45:39.201411 | LOOP [upload-logs : Upload console log and json output]